On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > All attempts by applications to access a DB's tables and columns > directly violates design principles that guard against close-coupling. > This is a basic design tenet for OO. Violating it when jumping from OO > to RDB is, I think, the source of problem that are collectively and > popularly referred to as the object-relational impedance mismatch. I wondered if we might be able to come up with some agreement on what object-relational impedance mismatch actually means. I always thought the mismatch was centred on the issue that a single object != single tuple, but it appears there may be more to it than that. I was hoping perhaps people might be able to offer perspectives on the issues that they have encountered. One thing I would like to avoid (outside of almost flames of course), is the notion that database technology is merely a persistence layer (do people still actually think that?) - I wonder if the 'mismatch' stems from such a perspective.
JOG wrote: > I wondered if we might be able to come up with some agreement on what > object-relational impedence mismatch actually means. I always thought > the mismatch was centred on the issue that a single object != single > tuple, but it appears there may be more to it than that. > The issue as I've discovered it has to do with the fact OO systems are composed of graphs of data and RDBs are two-dimensional. What defines an account in an RDB may be composed of multiple tables. An RDB might express multiple account types through multiple tables where OO may reflect it as multiple classes. Attempts to make RDBs function as graphs through mapping tools result in disappointing performance and, in my experience, too much mapping, too much infrastructure, and too many language/paradigm-specific layers. In short, way more code, way more maintenance, and way more job-security for consultants, pundits, and tool providers. -- Visit <http://blogs.instreamco.com/anything.php> to read my rants on technology and the finance industry. Visit <http://tggagne.blogspot.com/> for politics, society and culture.
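A concrete sketch of the layout Thomas describes, in plain SQL with invented names (none of this comes from his system): one "account" is spread over a common table plus one table per account type, and an OO mapping layer then has to reassemble the pieces.

-- Hypothetical supertype/subtype layout; every name here is invented.
CREATE TABLE accounts (
    account_id  INTEGER PRIMARY KEY,
    holder_name VARCHAR(100) NOT NULL,
    opened_on   DATE NOT NULL
);

CREATE TABLE checking_accounts (
    account_id      INTEGER PRIMARY KEY REFERENCES accounts(account_id),
    overdraft_limit DECIMAL(12,2) NOT NULL
);

CREATE TABLE savings_accounts (
    account_id    INTEGER PRIMARY KEY REFERENCES accounts(account_id),
    interest_rate DECIMAL(5,4) NOT NULL
);

-- Reassembling one checking-account "object" means a join; a mapping layer
-- ends up generating a query like this for every class in the hierarchy.
SELECT a.account_id, a.holder_name, c.overdraft_limit
FROM accounts a
JOIN checking_accounts c ON c.account_id = a.account_id;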
"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message news:7vqdnf21dLOnrVHanZ2dnUVZ_tuonZ2d@wideopenwest.com... > JOG wrote: >> I wondered if we might be able to come up with some agreement on what >> object-relational impedence mismatch actually means. I always thought >> the mismatch was centred on the issue that a single object != single >> tuple, but it appears there may be more to it than that. >> > The issue as I've discovered it has to do with the fact OO systems are > composed of graphs of data and RDBs are two-dimensional. RDBs are not two-dimensional, they are n-dimensional. You are confusing the picture of the thing with the thing. I have a three dimensional kitchen table. I have an RDB table with three columns (dimensions) called length, width and height that describes it. > What defines an account in an RDB may be composed of multiple tables. > An RDB might express multiple account types through multiple tables > where OO may reflect it as multiple classes. Attempts to make RDBs > function as graphs through mapping tools results in disappointing > performance and, in my experience, too much mapping, too much > infrastructure, and too much language/paradigm-specific layers. In > short, way more code, way more maintenance, and way more job-security > for consultants, pundits, and tool providers. I completely, 100% agree with that. Code is evil. Roy
"JOG" <jog@cs.nott.ac.uk> wrote in message news:0cd61579-0f26-422c-9aec-908ffdea59ff@i7g2000prf.googlegroups.com... > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > One thing I would like to avoid > (outside of almost flames of course), is the notion that database > technology is merely a persistence layer (do people still actually > think that?) Are you kidding?!! You will grow old and die before you find someone not on c.d.t. who DOESN'T think that. In the real world you will be thought some kind of simpleton/troll/nutcase if you suggest it isn't just a persistence layer. Roy
"Roy Hann" <specially@processed.almost.meat> wrote in message news:zpSdnSj5fPTYqVHanZ2dneKdnZydnZ2d@pipex.net... > "Thomas Gagne" <tgagne@wide-open-west.com> wrote in message > news:7vqdnf21dLOnrVHanZ2dnUVZ_tuonZ2d@wideopenwest.com... > > JOG wrote: > >> I wondered if we might be able to come up with some agreement on what > >> object-relational impedence mismatch actually means. I always thought > >> the mismatch was centred on the issue that a single object != single > >> tuple, but it appears there may be more to it than that. > >> > > The issue as I've discovered it has to do with the fact OO systems are > > composed of graphs of data and RDBs are two-dimensional. > > RDBs are not two-dimensional, they are n-dimensional. You are confusing the > picture of the thing with the thing. I have a three dimensional kitchen > table. I have an RDB table with three columns (dimensions) called length, > width and height that describes it. Stop! You're both right! There is a certain level of abstraction where and RDB is definitely n-dimensional. This is the level of abstraction where I spend most of my time thinking. So I tend to agree with you, Roy. There is, however, a different level of abstraction where an RDB is two-dimensional. So Tom is not "wrong" all the way. And it may be at that level of abstraction where the OO RM impedance match comes about. > I completely, 100% agree with that. Code is evil. > It appears, from reading c.o., that OO people regard data structures as evil. It sounds like Stalinists versus Trotskyites to me!
JOG wrote: > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > >>All attempts by applications to access a DB's tables and columns >>directly violates design principles that guard against close-coupling. >>This is a basic design tenet for OO. Violating it when jumping from OO >>to RDB is, I think, the source of problem that are collectively and >>popularly referred to as the object-relational impedance mismatch. > > I wondered if we might be able to come up with some agreement on what > object-relational impedence mismatch actually means. I always thought > the mismatch was centred on the issue that a single object != single > tuple, but it appears there may be more to it than that. > > I was hoping perhaps people might be able to offer perspectives on the > issues that they have encountered. One thing I would like to avoid > (outside of almost flames of course), is the notion that database > technology is merely a persistence layer (do people still actually > think that?) - I wonder if the 'mismatch' stems from such a > perspective. It's pretty obvious to me: object-relational mismatch is to relations as assembler-object mismatch is to objects.
JOG wrote: > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: >>All attempts by applications to access a DB's tables and columns >>directly violates design principles that guard against close-coupling. >>This is a basic design tenet for OO. Violating it when jumping from OO >>to RDB is, I think, the source of problem that are collectively and >>popularly referred to as the object-relational impedance mismatch. > I wondered if we might be able to come up with some agreement on what > object-relational impedence mismatch actually means.I always thought > the mismatch was centred on the issue that a single object != single > tuple, but it appears there may be more to it than that. Apart from issues such as "joins" etc, there actually isn't a mismatch between OO and Relational at the fundamental level IMHO. > I was hoping perhaps people might be able to offer perspectives on the > issues that they have encountered. Given some entity E = (p1,p2, ... pn) , where p1 etc are the properties of E, OO allows the following : 1. the properties of E could be realised as data values or a computational process 2. in any system, there may be multiple existing implementations for E (each instance of E created using any one of those implementations) An RDB that requires all properties are data values will not satisfy 1. An RDB that allows 1, but forces one universal implementation for E will not satisfy 2. For OO, the big problem is the prog langs themselves. Syntax, semantics, implementation. Assuming there is an RDB that can do 1 and 2 above, how can a specific OO prog lang 'align' its representation of objects to the 'tuple' form that will allow an underlying Relational engine to work its wonders (execution, optimisation etc) ?? Regards, Steven Perryman
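One way to read Steven's point 1 in relational terms, sketched with invented names: a property of E need not be a stored value, it can be realised as a computation, for instance through a view.

-- Entity E with p1 and p2 stored as data, and p3 realised as a computation
-- over them rather than as a stored value. All names are invented.
CREATE TABLE e (
    e_id INTEGER PRIMARY KEY,
    p1   DECIMAL(10,2) NOT NULL,
    p2   DECIMAL(10,2) NOT NULL
);

CREATE VIEW e_full AS
SELECT e_id,
       p1,
       p2,
       p1 * p2 AS p3   -- derived on demand, not stored
FROM e;

Whether this also satisfies his point 2, multiple coexisting implementations of E, is exactly the open question he raises.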
"David Cressey" <cressey73@verizon.net> wrote in message news:SGWyj.2671$4D2.1906@trndny06... > There is, however, a different level of abstraction where an RDB is > two-dimensional. There is? Are you thinking of report writers and GUI painters? > So Tom is not "wrong" all the way. And it may be at that > level of abstraction where the OO RM impedance match comes about. >> I completely, 100% agree with that. Code is evil. >> > It appears, from reading c.o., that OO people regard data structures as > evil. > > It sounds like Stalinists versus Trotskyites to me! Until I know their reasons for their views on data structures I couldn't say. However I notice that I am surrounded by programmers who consume most the development budget writing code, and when a change request comes along I can accommodate it in the database in minutes and they spend months spewing out more code (sometimes after doing an extensive and expensive impact assessment). Code may not be evil, but it sure has a case to answer. Roy
Roy Hann wrote: > > > Until I know their reasons for their views on data structures I couldn't > say. However I notice that I am surrounded by programmers who consume most > the development budget writing code, and when a change request comes along I > can accommodate it in the database in minutes and they spend months spewing > out more code (sometimes after doing an extensive and expensive impact > assessment). Code may not be evil, but it sure has a case to answer. > Roy, that's a great example of why I advocate for a separation (loosely-coupled) between applications and databases. -- Visit <http://blogs.instreamco.com/anything.php> to read my rants on technology and the finance industry. Visit <http://tggagne.blogspot.com/> for politics, society and culture.
On Mar 3, 10:41 am, Bob Badour <bbad...@pei.sympatico.ca> wrote: > JOG wrote: > > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > > >>All attempts by applications to access a DB's tables and columns > >>directly violates design principles that guard against close-coupling. > >>This is a basic design tenet for OO. Violating it when jumping from OO > >>to RDB is, I think, the source of problem that are collectively and > >>popularly referred to as the object-relational impedance mismatch. > > > I wondered if we might be able to come up with some agreement on what > > object-relational impedence mismatch actually means. I always thought > > the mismatch was centred on the issue that a single object != single > > tuple, but it appears there may be more to it than that. > > > I was hoping perhaps people might be able to offer perspectives on the > > issues that they have encountered. One thing I would like to avoid > > (outside of almost flames of course), is the notion that database > > technology is merely a persistence layer (do people still actually > > think that?) - I wonder if the 'mismatch' stems from such a > > perspective. > > It's pretty obvious to me: object-relational mismatch is to relations as > assembler-object mismatch is to objects. Very well put. I'm filing this away in my brain for future reference.
On Mar 3, 11:08=A0am, "Roy Hann" <specia...@processed.almost.meat> wrote: > "David Cressey" <cresse...@verizon.net> wrote in message > > news:SGWyj.2671$4D2.1906@trndny06... > > > There is, however, a different level of abstraction where an RDB is > > two-dimensional. > > There is? =A0Are you thinking of report writers and GUI painters? > > > So Tom is not "wrong" all the way. =A0And it may be at that > > level of abstraction where the OO RM impedance match comes about. > >> I completely, 100% agree with that. =A0Code is evil. > > > It appears, =A0from reading c.o., that OO people regard data structures = as > > evil. > > > It sounds like Stalinists versus Trotskyites to me! > > Until I know their reasons for their views on data structures I couldn't > say. =A0However I notice that I am surrounded by programmers who consume m= ost > the development budget writing code, and when a change request comes along= I > can accommodate it in the database in minutes and they spend months spewin= g > out more code (sometimes after doing an extensive and expensive impact > assessment). =A0Code may not be evil, but it sure has a case to answer. > > Roy My experience is somewhere between 2 and 3 orders of magnitude difference between implementing a business rules change in the db vs. the programming team doing it in OO code.
JOG wrote: > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > > All attempts by applications to access a DB's tables and columns > > directly violates design principles that guard against close-coupling. > > This is a basic design tenet for OO. Violating it when jumping from OO > > to RDB is, I think, the source of problem that are collectively and > > popularly referred to as the object-relational impedance mismatch. > > I wondered if we might be able to come up with some agreement on what > object-relational impedence mismatch actually means. I always thought > the mismatch was centred on the issue that a single object != single > tuple, but it appears there may be more to it than that. > > I was hoping perhaps people might be able to offer perspectives on the > issues that they have encountered. One thing I would like to avoid > (outside of almost flames of course), is the notion that database > technology is merely a persistence layer (do people still actually > think that?) - I wonder if the 'mismatch' stems from such a > perspective. This came up in a nearby message. I borrowed the following text from wikipedia: Key philosophical differences between the OO and relational models can be summarized as follows: Declarative vs. imperative interfaces -- Relational thinking tends to use data as interfaces, not behavior as interfaces. It thus has a declarative tilt in design philosophy in contrast to OO's behavioral tilt. (Some relational proponents propose using triggers, etc. to provide complex behavior, but this is not a common viewpoint.) Schema bound -- Objects do not have to follow a "parent schema" for which attributes or accessors an object has, while table rows must follow the entity's schema. A given row must belong to one and only one entity. The closest thing in OO is inheritance, but it is generally tree-shaped and optional. A dynamic reformulation of relational theory may solve this, but it is not practical yet. Access rules -- In relational databases, attributes are accessed and altered through predefined relational operators, while OO allows each class to create its own state alteration interface and practices. The "self-handling noun" viewpoint of OO gives independence to each object that the relational model does not permit. This is a "standards versus local freedom" debate. OO tends to argue that relational standards limit expressiveness, while relational proponents suggest the rule adherence allows more abstract math-like reasoning, integrity, and design consistency. Relationship between nouns and actions -- OO encourages a tight association between operations (actions) and the nouns (entities) that the operations operate on. The resulting tightly-bound entity containing both nouns and the operations is usually called a class, or in OO analysis, a concept. Relational designs generally do not assume there is anything natural or logical about such tight associations (outside of relational operators). Uniqueness observation -- Row identities (keys) generally have a text- representable form, but objects do not require an externally-viewable unique identifier. Object identity -- Objects (other than immutable ones) are generally considered to have a unique identity; two objects which happen to have the same state at a given point in time are not considered to be identical. Relations, on the other hand has no inherent concept of this kind of identity. 
That said, it is a common practice to fabricate "identity" for records in a database through use of globally-unique candidate keys; though many consider this a poor practice for any database record which does not have a one-to-one correspondence with a real world entity. (Relational, like objects, can use domain keys if they exist in the external world for identification purposes). Relational systems strive for "permanent" and inspect-able identification techniques, where-as object identification techniques tend to be transient or situational. Normalization -- Relational normalization practices are often ignored by OO designs. However, this may just be a bad habit instead of a native feature of OO. An alternate view is that a collection of objects, interlinked via pointers of some sort, is equivalent to a network database; which in turn can be viewed as an extremely- denormalized relational database. Schema inheritance -- Most relational databases do not support schema inheritance. Although such a feature could be added in theory to reduce the conflict with OOP, relational proponents are less likely to believe in the utility of hierarchical taxonomies and sub-typing because they tend to view set-based taxonomies or classification systems as more powerful and flexible than trees. OO advocates point out that inheritance/subtyping models need not be limited to trees (though this is a limitation in many popular OO languages such as Java), but non-tree OO solutions are seen as more difficult to formulate than set-based variation-on-a-theme management techniques preferred by relational. At the least, they differ from techniques commonly used in relational algebra. Structure vs. behaviour -- OO primarily focuses on ensuring that the structure of the program is reasonable (maintainable, understandable, extensible, reusable, safe), whereas relational systems focus on what kind of behaviour the resulting run-time system has (efficiency, adaptability, fault-tolerance, liveness, logical integrity, etc.). Object-oriented methods generally assume that the primary user of the object-oriented code and its interfaces are the application developers. In relational systems, the end-users' view of the behaviour of the system is sometimes considered to be more important. However, relational queries and "views" are common techniques to re- represent information in application- or task-specific configurations. Further, relational does not prohibit local or application-specific structures or tables from being created, although many common development tools do not directly provide such a feature, assuming objects will be used instead. This makes it difficult to know whether the stated non-developer perspective of relational is inherent to relational, or merely a product of current practice and tool implementation assumptions. As a result of the object-relational impedance mismatch, it is often argued by partisans on both sides of the debate that the other technology ought to be abandoned or reduced in scope. Some database advocates view traditional "procedural" languages as more compatible with a RDBMS than many OO languages; and/or suggest that a less OO- style ought to be used. (In particular, it is argued that long-lived domain objects in application code ought not to exist; any such objects that do exist should be created when a query is made and disposed of when a transaction or task is complete). 
On the other hand, many OO advocates argue that more OO-friendly persistence mechanisms, such as OODBMS, ought to be developed and used, and that relational technology ought to be phased out. Of course, it should be pointed out that many (if not most) programmers and DBAs do not hold either of these viewpoints; and view the object-relational impedance mismatch as a mere fact of life that Information Technology has to deal with. (end quote) -T-
David Cressey wrote: > "Roy Hann" <specially@processed.almost.meat> wrote in message > news:zpSdnSj5fPTYqVHanZ2dneKdnZydnZ2d@pipex.net... > >>"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message >>news:7vqdnf21dLOnrVHanZ2dnUVZ_tuonZ2d@wideopenwest.com... >> >>>JOG wrote: >>> >>>>I wondered if we might be able to come up with some agreement on what >>>>object-relational impedence mismatch actually means. I always thought >>>>the mismatch was centred on the issue that a single object != single >>>>tuple, but it appears there may be more to it than that. >>>> >>> >>>The issue as I've discovered it has to do with the fact OO systems are >>>composed of graphs of data and RDBs are two-dimensional. >> >>RDBs are not two-dimensional, they are n-dimensional. You are confusing > > the > >>picture of the thing with the thing. I have a three dimensional kitchen >>table. I have an RDB table with three columns (dimensions) called length, >>width and height that describes it. > > Stop! You're both right! > > There is a certain level of abstraction where and RDB is definitely > n-dimensional. This is the level of abstraction where I spend most of my > time thinking. So I tend to agree with you, Roy. > > There is, however, a different level of abstraction where an RDB is > two-dimensional. So Tom is not "wrong" all the way. And it may be at that > level of abstraction where the OO RM impedance match comes about. David, the flaw in your logic is: At the level of abstraction where an RDB is two-dimensional, OO is uni-dimensional. >>I completely, 100% agree with that. Code is evil. > > It appears, from reading c.o., that OO people regard data structures as > evil. > > It sounds like Stalinists versus Trotskyites to me!
On Mar 3, 9:27 am, "Roy Hann" <specia...@processed.almost.meat> wrote: > "JOG" <j...@cs.nott.ac.uk> wrote in message > > news:0cd61579-0f26-422c-9aec-908ffdea59ff@i7g2000prf.googlegroups.com... > > > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > > One thing I would like to avoid > > (outside of almost flames of course), is the notion that database > > technology is merely a persistence layer (do people still actually > > think that?) > > Are you kidding?!! You will grow old and die before you find someone not on > c.d.t. who DOESN'T think that. In the real world you will be thought some > kind of simpleton/troll/nutcase if you suggest it isn't just a persistence > layer. > > Roy As the girls say, "It depends on how you use it". It depends on how you use the DB. In Robert Martin's version of the payroll application, the DB is almost reduced to a dumb filing system ("persistence layer") because the app code does all the work. However, in my version: http://www.geocities.com/tablizer/payroll2.htm I *leveraged* the DB so that much if not most of the work is done by the database and queries *instead* of the app code. There's more attribute setup work, but noticeably less app code than Martin's. One "programs" largely by putting attributes in tables instead of writing app code. One can choose to use the features available from RDBMS, or they can choose to manually program it in app code. Your perspective on what DB's are "for" largely depends on which route you take. -T-
On Mon, 03 Mar 2008 17:36:50 GMT, David Cressey wrote: > "Roy Hann" <specially@processed.almost.meat> wrote in message > news:zpSdnSj5fPTYqVHanZ2dneKdnZydnZ2d@pipex.net... >> I completely, 100% agree with that. Code is evil. > > It appears, from reading c.o., that OO people regard data structures as > evil. Right, the structure of data would be too low-level to be able to capture behavior. As in mathematics, in OO the internal structure of objects is irrelevant and when considered, then only as an implementation detail to be abstracted away. OO deals with the structures of sets of objects exposing same behavior and relations between such sets. > It sounds like Stalinists versus Trotskyites to me! Huh, both regarded themselves as true Leninists. No, by this analogy c.d.t. wanted to impose Leninism on us, others who consider Leninism just a silly amateur philosophy, though very wicked when tried in practice. You know the examples, the Gulag, SQL... (:-)) -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
topmind wrote: > JOG wrote: > >>On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: >> >>>All attempts by applications to access a DB's tables and columns >>>directly violates design principles that guard against close-coupling. >>>This is a basic design tenet for OO. Violating it when jumping from OO >>>to RDB is, I think, the source of problem that are collectively and >>>popularly referred to as the object-relational impedance mismatch. >> >>I wondered if we might be able to come up with some agreement on what >>object-relational impedence mismatch actually means. I always thought >>the mismatch was centred on the issue that a single object != single >>tuple, but it appears there may be more to it than that. >> >>I was hoping perhaps people might be able to offer perspectives on the >>issues that they have encountered. One thing I would like to avoid >>(outside of almost flames of course), is the notion that database >>technology is merely a persistence layer (do people still actually >>think that?) - I wonder if the 'mismatch' stems from such a >>perspective. > > This came up in a nearby message. I borrowed the following text from > wikipedia: The text had too many blatant errors to start enumerating them all. The problem with wikipedia is any ignorant fool can just start typing nonsense. Even when one follows the requirements for references to primary sources, the quality of the end product can vary over many orders of magnitude.
On Mar 3, 6:47 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > On Mon, 03 Mar 2008 17:36:50 GMT, David Cressey wrote: > > "Roy Hann" <specia...@processed.almost.meat> wrote in message > >news:zpSdnSj5fPTYqVHanZ2dneKdnZydnZ2d@pipex.net... > >> I completely, 100% agree with that. Code is evil. > > > It appears, from reading c.o., that OO people regard data structures as > > evil. > > Right, the structure of data would be too low-level to be able to capture > behavior. As in mathematics, in OO the internal structure of objects is > irrelevant and when considered, then only as an implementation detail to be > abstracted away. OO deals with the structures of sets of objects exposing > same behavior and relations between such sets. > > > It sounds like Stalinists versus Trotskyites to me! > > Huh, both regarded themselves as true Leninists. No, by this analogy c.d.t. > wanted to impose Leninism on us, others who consider Leninism just a silly > amateur philosophy, though very wicked when tried in practice. You know the > examples, the Gulag, SQL... (:-)) If you'd please, I was hoping for reasonable discussion, not flamebait. > > -- > Regards, > Dmitry A. Kazakovhttp://www.dmitry-kazakov.de
On Mar 3, 6:54 pm, Bob Badour <bbad...@pei.sympatico.ca> wrote: > topmind wrote: > > JOG wrote: > > >>On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > > >>>All attempts by applications to access a DB's tables and columns > >>>directly violates design principles that guard against close-coupling. > >>>This is a basic design tenet for OO. Violating it when jumping from OO > >>>to RDB is, I think, the source of problem that are collectively and > >>>popularly referred to as the object-relational impedance mismatch. > > >>I wondered if we might be able to come up with some agreement on what > >>object-relational impedence mismatch actually means. I always thought > >>the mismatch was centred on the issue that a single object != single > >>tuple, but it appears there may be more to it than that. > > >>I was hoping perhaps people might be able to offer perspectives on the > >>issues that they have encountered. One thing I would like to avoid > >>(outside of almost flames of course), is the notion that database > >>technology is merely a persistence layer (do people still actually > >>think that?) - I wonder if the 'mismatch' stems from such a > >>perspective. > > > This came up in a nearby message. I borrowed the following text from > > wikipedia: > > The text had too many blatant errors to start enumerating them all. The > problem with wikipedia is any ignorant fool can just start typing > nonsense. Even when one follows the requirements for references to > primary sources, the quality of the end product can vary over many > orders of magnitude. And yet wikipedia entries are often remarkably half-decent. However not in this case - the entry is a convoluted mess, and actually instigated my question. (I often wonder if it would not be better if opposing views on wikipedia could be treated as separate articles rather than the standard mish mash).
On Mon, 03 Mar 2008 12:07:35 -0500, Thomas Gagne wrote: > JOG wrote: >> I wondered if we might be able to come up with some agreement on what >> object-relational impedence mismatch actually means. I always thought >> the mismatch was centred on the issue that a single object != single >> tuple, but it appears there may be more to it than that. >> > The issue as I've discovered it has to do with the fact OO systems are > composed of graphs of data and RDBs are two-dimensional. > What defines an account in an RDB may be composed of multiple tables. > An RDB might express multiple account types through multiple tables > where OO may reflect it as multiple classes. Attempts to make RDBs > function as graphs through mapping tools results in disappointing > performance and, in my experience, too much mapping, too much > infrastructure, and too much language/paradigm-specific layers. In > short, way more code, way more maintenance, and way more job-security > for consultants, pundits, and tool providers. Certainly RDBs are n-dimensional. The difference is not there. If you wanted to consider the space [of objects], then the difference would be in its the structure and the distance defined there. RA assumes existence of such space and tries to *abstract* any distance away. Clearly any implementation would introduce a distance and that will have a huge impact on the performance of the DB. This includes also the mental pictures of the DB "performing" in the heads of people programming it. Negative/positive impacts do not necessarily correlate, which does not make things any better. OO clusters the space into isolated multidimensional planes and *publishes* the distances and orders there. This allows a far better modeling of the problems which space differ from one described by RA. This also includes larger problems which can be described by RA but cannot be modeled by it due to hardware limitations (finiteness etc). Disclaimer: "a real FORTRAN program" can be written in any language under any paradigm etc. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
Dmitry A. Kazakov wrote: > On Mon, 03 Mar 2008 17:36:50 GMT, David Cressey wrote: > > > "Roy Hann" <specially@processed.almost.meat> wrote in message > > news:zpSdnSj5fPTYqVHanZ2dneKdnZydnZ2d@pipex.net... > > >> I completely, 100% agree with that. Code is evil. > > > > It appears, from reading c.o., that OO people regard data structures as > > evil. > > Right, the structure of data would be too low-level to be able to capture > behavior. As in mathematics, in OO the internal structure of objects is > irrelevant and when considered, then only as an implementation detail to be > abstracted away. OO deals with the structures of sets of objects exposing > same behavior and relations between such sets. This is misleading. An association between object A and object B does NOT go away just because it is managed via accessors. OOP not only doesn't get one away from dealing with things as "structures", but uses structures that were discredited in late 60's. Hiding behind accessors is merely a shell game. setMess and getMess is *still* a mess. > Regards, > Dmitry A. Kazakov > http://www.dmitry-kazakov.de -T-
Responding to the subject line: It's an "impedance mismatch". And it's not. OO and declarative, relational, etc. are just "orthogonal". That's a Good Thing.
On Mon, 3 Mar 2008 11:44:07 -0800 (PST), topmind wrote: > Dmitry A. Kazakov wrote: >> On Mon, 03 Mar 2008 17:36:50 GMT, David Cressey wrote: >> >>> "Roy Hann" <specially@processed.almost.meat> wrote in message >>> news:zpSdnSj5fPTYqVHanZ2dneKdnZydnZ2d@pipex.net... >> >>>> I completely, 100% agree with that. Code is evil. >>> >>> It appears, from reading c.o., that OO people regard data structures as >>> evil. >> >> Right, the structure of data would be too low-level to be able to capture >> behavior. As in mathematics, in OO the internal structure of objects is >> irrelevant and when considered, then only as an implementation detail to be >> abstracted away. OO deals with the structures of sets of objects exposing >> same behavior and relations between such sets. > > This is misleading. An association between object A and object B does > NOT go away just because it is managed via accessors. Associations are not managed via links. There are no strings tying the numbers 1 (object A) and 2 (object B). But I guess you probably meant a data structure called "linked list". Please note: _data structure_. Watch out, you fall into heresy, dear topmind! > OOP not only > doesn't get one away from dealing with things as "structures", but > uses structures that were discredited in late 60's. How can anybody discredit a data structure? Recently I was taught by our c.d.t. colleagues that data are recorded facts. You didn't object to them. So, may I humbly ask you, who and how could discredit facts? (outside Usenet discussion forums, I mean... (:-)) -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On 2008-03-03 10:52:10 -0600, JOG <jog@cs.nott.ac.uk> said: > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: >> All attempts by applications to access a DB's tables and columns >> directly violates design principles that guard against close-coupling. >> This is a basic design tenet for OO. Violating it when jumping from OO >> to RDB is, I think, the source of problem that are collectively and >> popularly referred to as the object-relational impedance mismatch. > > I wondered if we might be able to come up with some agreement on what > object-relational impedence mismatch actually means. I always thought > the mismatch was centred on the issue that a single object != single > tuple, but it appears there may be more to it than that. There is indeed more to it than that. OO and RDB are both strategies for partitioning data. However, the motivation behind the partitioning is completely different. OO partitions data based on the way a particular application will process that data. RDBs partition data based on how many different applications will need to access that data. This really isn't an impedance mismatch. It's a mismatch of intent. When designers use each strategy for what it's good at, there is no mismatch at all. > One thing I would like to avoid > (outside of almost flames of course), is the notion that database > technology is merely a persistence layer (do people still actually > think that?) - I wonder if the 'mismatch' stems from such a > perspective. Take out the word "merely", and recognize that "persistence" is more than just storage. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On Mar 3, 1:11 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > On Mon, 3 Mar 2008 11:44:07 -0800 (PST), topmind wrote: > > Dmitry A. Kazakov wrote: > >> On Mon, 03 Mar 2008 17:36:50 GMT, David Cressey wrote: > > >>> "Roy Hann" <specia...@processed.almost.meat> wrote in message > >>>news:zpSdnSj5fPTYqVHanZ2dneKdnZydnZ2d@pipex.net... > > >>>> I completely, 100% agree with that. Code is evil. > > >>> It appears, from reading c.o., that OO people regard data structures as > >>> evil. > > >> Right, the structure of data would be too low-level to be able to capture > >> behavior. As in mathematics, in OO the internal structure of objects is > >> irrelevant and when considered, then only as an implementation detail to be > >> abstracted away. OO deals with the structures of sets of objects exposing > >> same behavior and relations between such sets. > > > This is misleading. An association between object A and object B does > > NOT go away just because it is managed via accessors. > > Associations are not managed via links. There are no strings tying the > numbers 1 (object A) and 2 (object B). > > But I guess you probably meant a data structure called "linked list". > Please note: _data structure_. Watch out, you fall into heresy, dear > topmind! I did not say that *all* objects have associations between them. In practice, there are a lot of associations between objects. In memory these become a graph of object pointers. Even in UML, relationship diagrams are common. If all that could be magically encapsulated under the carpet so that nobody had to worry about them, then why have relationship diagrams? If one doesn't focus on such, then duplication (among other problems) slips in. > > > OOP not only > > doesn't get one away from dealing with things as "structures", but > > uses structures that were discredited in late 60's. > > How can anybody discredit a data structure? Recently I was taught by our > c.d.t. colleges that data are recorded facts. You didn't object them. So, > may I humble ask you, who and how could discredit facts? (outside Usenet > discussion forums, I mean... (:-)) Navigational structures are unwieldy on a larger scale for most mortals. Dr. Codd introduced relational to reign in the chaos of the pasta machines that were growing around that time. OOP didn't fix the problems with navigational structures. Putting accessors around them was not a fix. A developer/designer must still know and manage relationships between objects. > > -- > Regards, > Dmitry A. Kazakovhttp://www.dmitry-kazakov.de -T-
On 2008-03-03 11:24:20 -0600, "Roy Hann" <specially@processed.almost.meat> said: > Code is evil. SQL is code. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 11:36:50 -0600, "David Cressey" <cressey73@verizon.net> said: > There is, however, a different level of abstraction where an RDB is > two-dimensional. So Tom is not "wrong" all the way. And it may be at that > level of abstraction where the OO RM impedance match comes about. I don't know. Computer memory is one-dimensional. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 12:08:39 -0600, "Roy Hann" <specially@processed.almost.meat> said: >> However I notice that I am surrounded by programmers who consume most >> the development budget writing code, and when a change request comes >> along I >> can accommodate it in the database in minutes and they spend months >> spewing >> out more code (sometimes after doing an extensive and expensive impact >> assessment). Code may not be evil, but it sure has a case to answer. Silly developers and DBAs always have a case to answer. Their tools are innocent. BTW, "minutes"? "weeks"? Why didn't you make the change in "minutes" and demonstrate it while everybody else was still working on the impact analysis? -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On Mar 3, 10:54 am, Bob Badour <bbad...@pei.sympatico.ca> wrote: > topmind wrote: > > JOG wrote: > > >>On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > > >>>All attempts by applications to access a DB's tables and columns > >>>directly violates design principles that guard against close-coupling. > >>>This is a basic design tenet for OO. Violating it when jumping from OO > >>>to RDB is, I think, the source of problem that are collectively and > >>>popularly referred to as the object-relational impedance mismatch. > > >>I wondered if we might be able to come up with some agreement on what > >>object-relational impedence mismatch actually means. I always thought > >>the mismatch was centred on the issue that a single object != single > >>tuple, but it appears there may be more to it than that. > > >>I was hoping perhaps people might be able to offer perspectives on the > >>issues that they have encountered. One thing I would like to avoid > >>(outside of almost flames of course), is the notion that database > >>technology is merely a persistence layer (do people still actually > >>think that?) - I wonder if the 'mismatch' stems from such a > >>perspective. > > > This came up in a nearby message. I borrowed the following text from > > wikipedia: > > The text had too many blatant errors to start enumerating them all. Most of them are statements about philosophy or practice rather than absolutes; thus it's hard for them to be objectively or "blatantly" wrong. Whether that's a good thing or not is another issue. I see the list as a starting point for discussion even if it does not settle everything. It brings up interesting questions, such as why not have schema inheritance? If inheritance is good for OO, why is it not good for relational schemas? The answer is that OO and relational approach things differently. > The > problem with wikipedia is any ignorant fool can just start typing > nonsense. Even when one follows the requirements for references to > primary sources, the quality of the end product can vary over many > orders of magnitude. -T-
On 2008-03-03 12:29:02 -0600, TroyK <cs_troyk@juno.com> said: > My experience is somewhere between 2 and 3 orders of magnitude > difference between implementing a business rules change in the db vs. > the programming team doing it in OO code. Then you should be able to fly rings around the programmers and get them all fired. Why haven't you? Ladies and gentlemen, there are certainly tasks that are better suited to SQL and stored procedures. There are other tasks that are better suited to general purpose languages. True wisdom comes from knowing the strengths and weaknesses of both. Good architects build systems that combine the tools synergistically. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 11:36:50 -0600, "David Cressey" <cressey73@verizon.net> said: > It appears, from reading c.o., that OO people regard data structures as > evil. Not at all. Data structures are not evil, they just aren't objects. Objects expose behaviors and hide their data. Data structures expose their data and have no behavior. So the two are in almost diametric opposition. Moreover, software that uses data structures is easy to add new functions to, but hard to add new data to. On the other hand, software that uses objects is easy to add new objects to but hard to add new functions to. These two different affordances are the tools that a good architect will use to construct systems that are easy to change. The architect will use data structures in those areas where new functions are likely to be added, and will use objects in those areas where new data is likely to be added. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 13:44:07 -0600, topmind <topmind@technologist.com> said: >> Right, the structure of data would be too low-level to be able to capture >> behavior. As in mathematics, in OO the internal structure of objects is >> irrelevant and when considered, then only as an implementation detail to be >> abstracted away. OO deals with the structures of sets of objects exposing >> same behavior and relations between such sets. > > This is misleading. An association between object A and object B does > NOT go away just because it is managed via accessors. OOP not only > doesn't get one away from dealing with things as "structures", but > uses structures that were discredited in late 60's. > > Hiding behind accessors is merely a shell game. setMess and getMess is > *still* a mess. Bryce, he wasn't talking about setters and getters. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On Mar 3, 9:50 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-03 10:52:10 -0600, JOG <j...@cs.nott.ac.uk> said: > > > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > >> All attempts by applications to access a DB's tables and columns > >> directly violates design principles that guard against close-coupling. > >> This is a basic design tenet for OO. Violating it when jumping from OO > >> to RDB is, I think, the source of problem that are collectively and > >> popularly referred to as the object-relational impedance mismatch. > > > I wondered if we might be able to come up with some agreement on what > > object-relational impedence mismatch actually means. I always thought > > the mismatch was centred on the issue that a single object != single > > tuple, but it appears there may be more to it than that. > > There is indeed more to it than that. OO and RDB are both strategies > for partitioning data. However, the motivation behind the partitioning > is completely different. OO partitions data based on the way a > particular application will process that data. Is it really as clean cut as that? A single application may be required to process data in several ways - and some may be initially unforseen, as requirements (inevitably) may change? > RDBs partition data > based on how many different applications will need to access that data. ....or based on the different ways a single application will need to process the data, no? > > This really isn't an impedance mismatch. It's a mismatch of intent. > When designers use each strategy for what it's good at, there is no > mismatch at all. > > > One thing I would like to avoid > > (outside of almost flames of course), is the notion that database > > technology is merely a persistence layer (do people still actually > > think that?) - I wonder if the 'mismatch' stems from such a > > perspective. > > Take out the word "merely", and recognize that "persistence" is more > than just storage. Perhaps you could expand? I was referring to the fact that databases do more than 'persist' objects. > > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com > The Agile Transition Experts | web: www.objectmentor.com > 800-338-6716 |
On 2008-03-03 15:11:19 -0600, "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> said: > On Mon, 3 Mar 2008 11:44:07 -0800 (PST), topmind wrote: > >> OOP not only >> doesn't get one away from dealing with things as "structures", but >> uses structures that were discredited in late 60's. > > How can anybody discredit a data structure? Recently I was taught by our > c.d.t. colleges that data are recorded facts. You didn't object them. So, > may I humble ask you, who and how could discredit facts? (outside Usenet > discussion forums, I mean... (:-)) Bryce is all hot and bothered about the vague similarity between network databases (which he asserts were discredited even though they remain the heart of nearly all directory systems and REST hierarchies) and object graphs. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
topmind wrote: > On Mar 3, 10:54 am, Bob Badour <bbad...@pei.sympatico.ca> wrote: > >>topmind wrote: >> >>>JOG wrote: >> >>>>On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: >> >>>>>All attempts by applications to access a DB's tables and columns >>>>>directly violates design principles that guard against close-coupling. >>>>>This is a basic design tenet for OO. Violating it when jumping from OO >>>>>to RDB is, I think, the source of problem that are collectively and >>>>>popularly referred to as the object-relational impedance mismatch. >> >>>>I wondered if we might be able to come up with some agreement on what >>>>object-relational impedence mismatch actually means. I always thought >>>>the mismatch was centred on the issue that a single object != single >>>>tuple, but it appears there may be more to it than that. >> >>>>I was hoping perhaps people might be able to offer perspectives on the >>>>issues that they have encountered. One thing I would like to avoid >>>>(outside of almost flames of course), is the notion that database >>>>technology is merely a persistence layer (do people still actually >>>>think that?) - I wonder if the 'mismatch' stems from such a >>>>perspective. >> >>>This came up in a nearby message. I borrowed the following text from >>>wikipedia: >> >>The text had too many blatant errors to start enumerating them all. > > Most of them are statements about philosophy or practice rather than > absolutes; thus its hard for them to be objectively or "blatantly" > wrong. Whether that's a good thing or not is another issue. I see the > list as a starting point for discussion even if it does not settle > everything. > > It brings up interesting questions, such as why not have schema > inheritance? If inheritance is good or OO, why is it not good for > relational schema's? The answer is that OO and relational approach > things differently. Your question presupposes that inheritance is good for OO.
"Roy Hann" <specially@processed.almost.meat> wrote in message news:nfWdnW05its1o1HanZ2dnUVZ8t2snZ2d@pipex.net... > Until I know their reasons for their views on data structures I couldn't > say. However I notice that I am surrounded by programmers who consume most > the development budget writing code, and when a change request comes along I > can accommodate it in the database in minutes and they spend months spewing > out more code (sometimes after doing an extensive and expensive impact > assessment). Code may not be evil, but it sure has a case to answer. I have to agree with this comment, based on my own experience. *However*, my experience, and perhaps yours as well, may be biased towards cases in business where relatively rich data is subjected to fairly straightforward transformations. A lot of commercial data processing fits that description.
"Robert Martin" <unclebob@objectmentor.com> wrote in message news:2008030315531543042-unclebob@objectmentorcom... > On 2008-03-03 11:24:20 -0600, "Roy Hann" <specially@processed.almost.meat> > said: > >> Code is evil. > > SQL is code. SQL is a botch. It is vile rather than evil. :-) Roy
"Bob Badour" <bbadour@pei.sympatico.ca> wrote in message news:47cc383f$0$4041$9a566e8b@news.aliant.net... > It's pretty obvious to me: object-relational mismatch is to relations as > assembler-object mismatch is to objects. I didn't get this comment. Now that someone else has flagged it as a keeper, I feel the need to ask for an explanation.
"Bob Badour" <bbadour@pei.sympatico.ca> wrote in message news:47cc4766$0$4037$9a566e8b@news.aliant.net... > David Cressey wrote: > > "Roy Hann" <specially@processed.almost.meat> wrote in message > > news:zpSdnSj5fPTYqVHanZ2dneKdnZydnZ2d@pipex.net... > > > >>"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message > >>news:7vqdnf21dLOnrVHanZ2dnUVZ_tuonZ2d@wideopenwest.com... > >> > >>>JOG wrote: > >>> > >>>>I wondered if we might be able to come up with some agreement on what > >>>>object-relational impedence mismatch actually means. I always thought > >>>>the mismatch was centred on the issue that a single object != single > >>>>tuple, but it appears there may be more to it than that. > >>>> > >>> > >>>The issue as I've discovered it has to do with the fact OO systems are > >>>composed of graphs of data and RDBs are two-dimensional. > >> > >>RDBs are not two-dimensional, they are n-dimensional. You are confusing > > > > the > > > >>picture of the thing with the thing. I have a three dimensional kitchen > >>table. I have an RDB table with three columns (dimensions) called length, > >>width and height that describes it. > > > > Stop! You're both right! > > > > There is a certain level of abstraction where and RDB is definitely > > n-dimensional. This is the level of abstraction where I spend most of my > > time thinking. So I tend to agree with you, Roy. > > > > There is, however, a different level of abstraction where an RDB is > > two-dimensional. So Tom is not "wrong" all the way. And it may be at that > > level of abstraction where the OO RM impedance match comes about. > > David, the flaw in your logic is: At the level of abstraction where an > RDB is two-dimensional, OO is uni-dimensional. > That would certainly explain the impedance mismatch!
"Robert Martin" <unclebob@objectmentor.com> wrote in message news:2008030315573622503-unclebob@objectmentorcom... > On 2008-03-03 12:08:39 -0600, "Roy Hann" <specially@processed.almost.meat> > said: > >> However I notice that I am surrounded by programmers who consume most >> the development budget writing code, and when a change request comes >> along I >> can accommodate it in the database in minutes and they spend months >> spewing >> out more code (sometimes after doing an extensive and expensive impact >> assessment). Code may not be evil, but it sure has a case to answer. > > Silly developers and DBAs always have a case to answer. Their tools are > innocent. I admit I am just being provocative, so I probably deserve a baffling non sequiter as a response. However you seem to be claiming that code doesn't routinely take weeks and months to write (which is what I assume you mean when you say the tools are innocent). That is nonsense, of course. > BTW, "minutes"? "weeks"? Why didn't you make the change in "minutes" and > demonstrate it while everybody else was still working on the impact > analysis? Well if I had something better than an improvised ad hoc language that has its roots in the late 1960s and early 1970s, which offers me a collection of datatypes barely richer than those of FORTRAN IV, I might have done. But as long as most people think writing procedural code at the speed of human fingers is all we can reasonably aspire to, I won't be getting anything better to work with. Roy
On 2008-03-03 12:46:09 -0600, topmind <topmind@technologist.com> said: > It depends on how you use the DB. In Robert Martin's version of the > payroll application, the DB is almost reduced to a dumb filing system > ("persistence layer") because the app code does all the work. However, > in my version: > > http://www.geocities.com/tablizer/payroll2.htm > > I *leveraged* the DB so that much if not most of the work is done by > the database and queries *instead* of the app code.

Well, that might be a bit of an exaggeration. Here's just one part of the code in your example:

<cffunction name="printStubs">
  <cfquery name="stubQry" datasource="#dsn#">
    SELECT * FROM payStubs, employees, payItems
    WHERE empRef = empID AND payItemRef = payItemID
    ORDER BY payDate, empRef, lineNum
  </cfquery>
  <!--- <cfdump var="#stubQry#"> [for debugging] --->
  <h3>Pay Stub Samples</h3>
  <cfset shadeClr = "##f0f0f0">
  <cfoutput query="stubQry" group="empID">  <!--- outer group by empID --->
    <cfset sumPay = 0>  <!--- init --->
    <hr>
    <table border=1 cellpadding=2 cellspacing=0>
      <tr>
        <td bgcolor="#shadeClr#" width="25%">Name:</td>
        <td colspan=3>#stubQry.lastName#, #stubQry.firstName# #stubQry.middle#</td>
      </tr>
      <tr>
        <td bgcolor="#shadeClr#">Empl. ID:</td>
        <td colspan=3>#stubQry.empID#</td>
      </tr>
      <tr>
        <td bgcolor="#shadeClr#">Payroll Date:</td>
        <td colspan=3>#stubQry.payDate#</td>
      </tr>
      <!--- column headings --->
      <tr bgcolor="#shadeClr#">
        <th colspan=2>Item<br>Description</th>
        <th>Reference<br>Value</th>
        <th>Pay<br>Amount</th>
      </tr>
      <!--- for each line item --->
      <cfoutput>
        <!--- skip line if zero-amount suppression is on --->
        <cfif Not (stubQry.displayOptions Contains '(supprzero)' And stubQry.amount EQ 0.00)>
          <tr>
            <td colspan=2>#stubQry.lineDescript#</td>
            <td align="right">
              <cfif stubQry.sumMult is 0>
                #stubQry.referenceAmt#
              <cfelse>
              </cfif>
            </td>
            <td align="right">
              <cfif stubQry.sumMult NEQ 0>
                #numberFormat(stubQry.amount,"-9,999,999,999.99")#
              <cfelse>
              </cfif>
            </td>
          </tr>
        </cfif>
        <cfset sumPay = sumPay + stubQry.amount>
      </cfoutput>
      <!--- Total Line --->
      <tr>
        <td colspan=3 align="right"><b>Total</b></td>
        <td align="right">#numberFormat(sumPay,"-9,999,999,999.99")#</td>
      </tr>
    </table>
  </cfoutput>
</cffunction>

That's a lot of processing code. However, I think your point is not completely invalid. You *did* use more database facilities than I did. Was your code smaller or better? From the above I would say that it was not better. In the C++ example from my book "Agile Software Development, Principles, Patterns, and Practices" you won't see any methods that are even a quarter as long as what you have there. As for whether it was smaller, it's hard to say because the two programs don't do very similar things. You used a very different set of requirements than I did.

Still, your point is valid in that I purposely pulled out any kind of SQL from my example, and made use of C++ code rather than database tools -- to the extent that I even did a linear search through all employees rather than using the database to efficiently query them. This was done on purpose because the book is an exposition on OO design principles as opposed to database principles. You might complain that a book about writing good software *should* have database concepts mixed in with it. I sympathize, but authors must make choices, just as you did in your example.

--
Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
"Robert Martin" <unclebob@objectmentor.com> wrote in message news:2008030316112582327-unclebob@objectmentorcom... > Ladies and gentlemen, there are certainly tasks that are better suited > to SQL and stored procedures. There are other tasks that are better > suited to general purpose languages. True wisdom comes from knowing > the strengths and weaknesses of both. Good architects build systems > that combine the tools synergistically. Agreed. But often the suitability of a given task has less to do with the language than it does with the domain of responsibilities of either data managers or code managers. Both SQL and C++ are straw men in the sense that there are things that could be done quite differently if one were to reimplement from scratch, know what we now know. I think it's relatively straight forward to extend a language like SQL so that it becomes a general purpose language. (Whether it becomes a good one or a bad one is a matter for further discussion). It doesn't seem as straightforward to me to extend a language like C++ so that it becomes suitable for declaring relational transformations on data. If it is straight forward, then I'd like to hear from people who are doing it. But the idea of a single language that is suitable for everything remains an elusive goal, and probably an unproductive endeavor.
"Robert Martin" > Moreover, software that uses data structures is easy to add new > functions to, but hard to add new data to. On the other hand, software > that uses objects is easy to add new objects to but hard to add new > functions to. > > These two different affordances are the tools that a good architect > will use to construct systems that are easy to change. The architect > will use data structures in those areas where new functions are likely > to be added, and will use objects in those areas where new data is > likely to be added. > I honestly think the above undervalues data independence. It turns out to be surprisingly easy to to add new data to a well designed database with minimal impact on existing programs that access the data. The impact on programs that provide data is greater, but still quite manageable.
Roy Hann wrote: > "Thomas Gagne" <tgagne@wide-open-west.com> wrote in message > news:7vqdnf21dLOnrVHanZ2dnUVZ_tuonZ2d@wideopenwest.com... > >>JOG wrote: >> >>>I wondered if we might be able to come up with some agreement on what >>>object-relational impedence mismatch actually means. I always thought >>>the mismatch was centred on the issue that a single object != single >>>tuple, but it appears there may be more to it than that. >> >>The issue as I've discovered it has to do with the fact OO systems are >>composed of graphs of data and RDBs are two-dimensional. That is a remarkably uninformed and ill-conceived sentence. It's rather like comparing boats and cars saying boats have hulls and cars are pretty. Regardless of truth, the comparison is pointless on its face. One can represent any graph on two-dimensional media just as one can represent any relation on two-dimensional media. In fact, since a graph is merely a set of vertices and a set of directed edges and since one can easily represent vertices and directed edges as tuples, one can easily represent any graph using relations. The Great Debate was had about 3 decades ago and graphs lost. Nothing has happened in the meantime to improve the outcome in the favour of graphs. > RDBs are not two-dimensional, they are n-dimensional. You are confusing the > picture of the thing with the thing. I have a three dimensional kitchen > table. I have an RDB table with three columns (dimensions) called length, > width and height that describes it. > > >>What defines an account in an RDB may be composed of multiple tables. >>An RDB might express multiple account types through multiple tables >>where OO may reflect it as multiple classes. Attempts to make RDBs >>function as graphs through mapping tools results in disappointing >>performance and, in my experience, too much mapping, too much What moron would want to make an RDB function as graphs in the first place? We have known for 3 decades that graphs, themselves, result in disappointing performance and too much mapping. After all, every time one adds a new structural element, the design choices increase geometrically with no logical justification for choosing among them. This leads people to discard logical independence and invent physical reasons. The arbitrary design choices then lead to query bias that effectively makes large numbers of useful queries prohibitively expensive to write or to execute. >>infrastructure, and too much language/paradigm-specific layers. In >>short, way more code, way more maintenance, and way more job-security >>for consultants, pundits, and tool providers. > > I completely, 100% agree with that. Code is evil. > > Roy
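[Aside: a minimal SQL sketch of "vertices and directed edges as tuples"; the names are invented for illustration.]

    -- A directed graph as two relations:
    CREATE TABLE vertex (
        vertex_id INTEGER PRIMARY KEY,
        label     VARCHAR(40)
    );
    CREATE TABLE edge (
        from_id INTEGER REFERENCES vertex,
        to_id   INTEGER REFERENCES vertex,
        PRIMARY KEY (from_id, to_id)
    );

    -- Successors and predecessors of a vertex are both ordinary joins;
    -- neither direction is privileged by the representation:
    SELECT v.label FROM edge e JOIN vertex v ON v.vertex_id = e.to_id   WHERE e.from_id = 1;
    SELECT v.label FROM edge e JOIN vertex v ON v.vertex_id = e.from_id WHERE e.to_id = 1;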
On 2008-03-03, Robert Martin <unclebob@objectmentor.com> wrote: > > There is indeed more to it than that. OO and RDB are both strategies > for partitioning data. However, the motivation behind the partitioning > is completely different. OO partitions data based on the way a > particular application will process that data. RDBs partition data > based on how many different applications will need to access that data. > No, RDBs partition data so that it is sensibly and easily available to any possible application. So if you use OO you are saying "there will never be any other application that will need my data". E
On Mar 3, 9:57 pm, topmind <topm...@technologist.com> wrote: > On Mar 3, 10:54 am, Bob Badour <bbad...@pei.sympatico.ca> wrote: > > > > > topmind wrote: > > > JOG wrote: > > > >>On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > > > >>>All attempts by applications to access a DB's tables and columns > > >>>directly violates design principles that guard against close-coupling. > > >>>This is a basic design tenet for OO. Violating it when jumping from OO > > >>>to RDB is, I think, the source of problem that are collectively and > > >>>popularly referred to as the object-relational impedance mismatch. > > > >>I wondered if we might be able to come up with some agreement on what > > >>object-relational impedence mismatch actually means. I always thought > > >>the mismatch was centred on the issue that a single object != single > > >>tuple, but it appears there may be more to it than that. > > > >>I was hoping perhaps people might be able to offer perspectives on the > > >>issues that they have encountered. One thing I would like to avoid > > >>(outside of almost flames of course), is the notion that database > > >>technology is merely a persistence layer (do people still actually > > >>think that?) - I wonder if the 'mismatch' stems from such a > > >>perspective. > > > > This came up in a nearby message. I borrowed the following text from > > > wikipedia: > > > The text had too many blatant errors to start enumerating them all. > > Most of them are statements about philosophy or practice rather than > absolutes; thus its hard for them to be objectively or "blatantly" > wrong. Whether that's a good thing or not is another issue. I see the > list as a starting point for discussion even if it does not settle > everything. > > It brings up interesting questions, such as why not have schema > inheritance? If inheritance is good or OO, why is it not good for > relational schema's? The answer is that OO and relational approach > things differently. This reminds me of a serious 'click' moment I had with data structures. A long time ago, in a galaxy far far away, I once sat blindly reinventing my own network model, with identifiers, pointers, types, and all the fun of the fair. As an OO programmer it was the only mindset I had. Then I considered encoding data similar to the classic example of: * Aristotle is a man * All men are mortal * |= Aristotle is mortal Well, /clearly/ what I was dealing with here was a generic entity class, of which Man was a subclass, and Aristotle an instance. Something like: class Entity { boolean mortal; string name; Entity(_name, _mortal) : name(_name), mortal(_mortal); }; class Man : public Entity { date bday; Man(_name, _mortal) : Entity(_name, _mortal), bday(_bday); }; But, as I extended the example, the structures got more convoluted, and the result more and more of a mess. Finally something clicked. It wasn't about types, objects or inheritance, it was about /inference/, and what I actually had was: Name(x, Aristotle) -> Species(x, Man) Species(x, Man) -> Mortality(x, Mortal) |= Name(x, Aristotle) -> Mortalilty(x, Mortal) No types or reification in sight. Instead I had two groups of statements: People = {Name, Species, Bday} Entities = {Species, Mortality} A join of the two statements gave me the inference I required: {Name, Mortality}. All of a sudden it seemed simple. So some questions: 1) So why not treat all 'inheritance' in this way? 2) Could one extend to include 'behaviour' as well? 
3) And is this a crazy thing to suggest in a cross post to an OO group? > > > The > > problem with wikipedia is any ignorant fool can just start typing > > nonsense. Even when one follows the requirements for references to > > primary sources, the quality of the end product can vary over many > > orders of magnitude. > > -T-
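[Aside: a minimal SQL rendering of the inference example above, using the two relations JOG names; apart from the values taken from the post, everything is invented for illustration.]

    CREATE TABLE people (
        name    VARCHAR(40) PRIMARY KEY,
        species VARCHAR(40) NOT NULL,
        bday    DATE
    );
    CREATE TABLE entities (
        species   VARCHAR(40) PRIMARY KEY,
        mortality VARCHAR(10) NOT NULL
    );
    INSERT INTO people (name, species) VALUES ('Aristotle', 'Man');
    INSERT INTO entities (species, mortality) VALUES ('Man', 'Mortal');

    -- The inferred {Name, Mortality} is just the join:
    SELECT p.name, e.mortality
    FROM people p JOIN entities e ON e.species = p.species;
    -- Aristotle | Mortal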
Robert Martin wrote: > On 2008-03-03 12:46:09 -0600, topmind <topmind@technologist.com> said: > > > It depends on how you use the DB. In Robert Martin's version of the > > payroll application, the DB is almost reduced to a dumb filing system > > ("persistence layer") because the app code does all the work. However, > > in my version: > > > > http://www.geocities.com/tablizer/payroll2.htm > > > > I *leveraged* the DB so that much if not most of the work is done by > > the database and queries *instead* of the app code. > > > Well, that might be a bit of an exaggeration. Here's just one part of > the code in your example: > > [snip: the ColdFusion listing quoted here appears in full upthread] > > That's a lot of processing code. Compared to? > > However, I think your point is not completely invalid. You *did* use > more database facilities than I did. Was your code smaller or better? > From the above I would say that it was not better. In the C++ example > from my book "Agile Software Development, Principles, Patterns, and > Practices" you won't see any methods that are even a quarter as long as > what you have there. I could have shortened it if I wanted to by splitting it up, but I saw no need in this case. The size can be shortened without resorting to OOP. Clarity and shortness of methods/functions are not necessarily the same thing. Then again, everybody likes things a different way. Software engineering is largely about psychology, and everybody's psychology is different. What trips you up may not trip me up, and vice versa. But I think anybody inspecting both examples will clearly see that my version is a lot less total code. > As for whether it was smaller, it's hard to say > because the two programs don't do very similar things.
You used a very > different set of requirements than I did. I satisfied almost everything on your opening list of requirements, plus allowing more line items to be created without altering existing code. You talk a lot about adding or changing things without it impacting lots of code. By not hard-wiring formulas and line-items into the code, I improved on this metric. > > Still, your point is valid in that I purposely pulled out any kind of > SQL from my example, and made use of C++ code rather than database > tools -- to the extent that I even did a linear search through all > employees rather than using the database to efficiently query them. > This was done on purpose because the book is an exposition on OO design > principles as opposed to database principles. > > You might complain that a book about writing good software *should* > have database concepts mixed in with it. I sympathize, but authors > must make choices, just as you did in your example. If I was writing a formal book on software engineering, I hopefully would at least point out the pros and cons of various design decisions. We both came up with very different designs for solving essentially the same problem. This is because the implementation options are wide open. > -- > Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com -T-
On Mar 3, 3:06 pm, JOG <j...@cs.nott.ac.uk> wrote: > On Mar 3, 9:57 pm, topmind <topm...@technologist.com> wrote: > > > > > On Mar 3, 10:54 am, Bob Badour <bbad...@pei.sympatico.ca> wrote: > > > > topmind wrote: > > > > JOG wrote: > > > > >>On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: > > > > >>>All attempts by applications to access a DB's tables and columns > > > >>>directly violates design principles that guard against close-coupling. > > > >>>This is a basic design tenet for OO. Violating it when jumping from OO > > > >>>to RDB is, I think, the source of problem that are collectively and > > > >>>popularly referred to as the object-relational impedance mismatch. > > > > >>I wondered if we might be able to come up with some agreement on what > > > >>object-relational impedence mismatch actually means. I always thought > > > >>the mismatch was centred on the issue that a single object != single > > > >>tuple, but it appears there may be more to it than that. > > > > >>I was hoping perhaps people might be able to offer perspectives on the > > > >>issues that they have encountered. One thing I would like to avoid > > > >>(outside of almost flames of course), is the notion that database > > > >>technology is merely a persistence layer (do people still actually > > > >>think that?) - I wonder if the 'mismatch' stems from such a > > > >>perspective. > > > > > This came up in a nearby message. I borrowed the following text from > > > > wikipedia: > > > > The text had too many blatant errors to start enumerating them all. > > > Most of them are statements about philosophy or practice rather than > > absolutes; thus its hard for them to be objectively or "blatantly" > > wrong. Whether that's a good thing or not is another issue. I see the > > list as a starting point for discussion even if it does not settle > > everything. > > > It brings up interesting questions, such as why not have schema > > inheritance? If inheritance is good or OO, why is it not good for > > relational schema's? The answer is that OO and relational approach > > things differently. > > This reminds me of a serious 'click' moment I had with data > structures. A long time ago, in a galaxy far far away, I once sat > blindly reinventing my own network model, with identifiers, pointers, > types, and all the fun of the fair. As an OO programmer it was the > only mindset I had. Then I considered encoding data similar to the > classic example of: > > * Aristotle is a man > * All men are mortal > * |= Aristotle is mortal > > Well, /clearly/ what I was dealing with here was a generic entity > class, of which Man was a subclass, and Aristotle an instance. > Something like: > > class Entity > { > boolean mortal; > string name; > Entity(_name, _mortal) : name(_name), mortal(_mortal); > > }; > > class Man : public Entity > { > date bday; > Man(_name, _mortal) : Entity(_name, _mortal), bday(_bday); > > }; > > But, as I extended the example, the structures got more convoluted, > and the result more and more of a mess. Finally something clicked. It > wasn't about types, objects or inheritance, it was about /inference/, > and what I actually had was: > > Name(x, Aristotle) -> Species(x, Man) > Species(x, Man) -> Mortality(x, Mortal) > |= Name(x, Aristotle) -> Mortalilty(x, Mortal) > > No types or reification in sight. Instead I had two groups of > statements: > People = {Name, Species, Bday} > Entities = {Species, Mortality} > > A join of the two statements gave me the inference I required: {Name, > Mortality}. 
All of a sudden it seemed simple. So some questions: > > 1) So why not treat all 'inheritance' in this way? > 2) Could one extend to include 'behaviour' as well? > 3) And is this a crazy thing to suggest in a cross post to an OO > group? > I think you'd like Prolog. And, I think a Prolog-relational merger would be easier than an OO-Prolog merger. > > > > > The > > > problem with wikipedia is any ignorant fool can just start typing > > > nonsense. Even when one follows the requirements for references to > > > primary sources, the quality of the end product can vary over many > > > orders of magnitude. > -T-
topmind wrote: > > Robert Martin wrote: > >>On 2008-03-03 12:46:09 -0600, topmind <topmind@technologist.com> said: [snip] >>You might complain that a book about writing good software *should* >>have database concepts mixed in with it. I sympathize, but authors >>must make choices, just as you did in your example. Oh my god, the idiot is an author?!? For his sake, it's a good thing we don't have any concept of writing malpractice. He'd be giving a lot of refunds and spending a lot of time in court.
David Cressey wrote: > "Bob Badour" <bbadour@pei.sympatico.ca> wrote in message > news:47cc383f$0$4041$9a566e8b@news.aliant.net... > >>It's pretty obvious to me: object-relational mismatch is to relations as >>assembler-object mismatch is to objects. > > I didn't get this comment. Now that someone else has flagged it as a > keeper, I feel the need to ask for an explanation. What do you know about assembler?
David Cressey wrote: > "Robert Martin" <unclebob@objectmentor.com> wrote in message > news:2008030316112582327-unclebob@objectmentorcom... > > >>Ladies and gentlemen, there are certainly tasks that are better suited >>to SQL and stored procedures. There are other tasks that are better >>suited to general purpose languages. True wisdom comes from knowing >>the strengths and weaknesses of both. Sadly, Robert Martin has demonstrated again and again that he lacks any wisdom let alone true wisdom. [snip]
On Mar 3, 2:18 pm, JOG <j...@cs.nott.ac.uk> wrote: > On Mar 3, 9:50 pm, Robert Martin <uncle...@objectmentor.com> wrote: > > > There is indeed more to it than that. OO and RDB are both strategies > > for partitioning data. However, the motivation behind the partitioning > > is completely different. OO partitions data based on the way a > > particular application will process that data. > > Is it really as clean cut as that? Boy howdy, no, it's not. For one thing, neither OO nor the RM is a strategy for partitioning data. One's requirements dictate data with certain functional dependencies. Among the easiest tasks in application design is one of the earliest: structuring the data. The RM structures data as relations. Relational structures lack query bias, which is one of the reasons why SQL is so good at ad hoc queries (compared to the other choices). OO, by contrast, mandates an object structure (unsurprisingly): out of the gate, one has no choice but to build in a query bias, whether one wants it or not. Furthermore, since OOPLs lack physical independence, traversing the graph may be quite expensive, particularly in the case where the graph is backed by storage in a database, which is part of why ORM is such a universally bad idea. Of course, one can always build physical independence into one's objects, but that has to be done manually, one pathway at a time. > > > One thing I would like to avoid > > > (outside of almost flames of course), is the notion that database > > > technology is merely a persistence layer (do people still actually > > > think that?) - I wonder if the 'mismatch' stems from such a > > > perspective. > > > Take out the word "merely", and recognize that "persistence" is more > > than just storage. > > Perhaps you could expand? I was referring to the fact that databases > do more than 'persist' objects. DBMSs don't even necessarily persist at all. I've said in the past that persistence isn't even a first-tier dbms feature. It's a strong second-tier feature, to be sure! But a dbms with no persistence still has strong use-cases. Which makes it clear enough that anyone who considers SQL to be about persistence *primarily* (if not exclusively) hasn't yet identified what the first-tier features even *are*. Marshall
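[Aside: a small SQL illustration of the "query bias" point; the table and value names are invented.]

    -- One relation, and questions from either direction are equally direct:
    CREATE TABLE assignment (
        emp_id     INTEGER,
        project_id INTEGER,
        PRIMARY KEY (emp_id, project_id)
    );

    SELECT project_id FROM assignment WHERE emp_id = 7;      -- projects for one employee
    SELECT emp_id     FROM assignment WHERE project_id = 42; -- employees on one project

    -- An object graph that stores a list of projects inside each Employee
    -- object answers the first question directly, but the second only by
    -- walking every Employee: the structure has a built-in query bias.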
On Mar 3, 8:52 am, JOG <j...@cs.nott.ac.uk> wrote: > > I was hoping perhaps people might be able to offer perspectives > on the issues that they have encountered. One piece of the puzzle that is often neglected is a qualitative difference in the type systems of some of the most popular OOPLs vs. SQL or a hypothetical relational language. This difference doesn't get a lot of ink in industrial settings but it's quite important. C++ and Java and many similar languages are nominally typed; SQL is structurally typed. For example, suppose I have two Java classes as follows (toy example for illustrative purposes): class Foo { int x; int y; } class Bar { int x; int y; } The two classes are different; one cannot use an instance of one as an instance of the other under any circumstances, not even with a cast. In SQL, if I have two relations with x and y int columns, I can union them, or join on them, or whatever. There is no way, in fact, to forbid such a thing, just like in Java there is no way to allow such a thing. One particular manifestation of this difference is that if one is writing an ORM for Java, one has to address the issue that one needs a new class for every distinct column-set of every query one has. This fact sometimes drives unfortunate design decisions. Marshall
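[Aside: the SQL half of the comparison, as a minimal sketch; the table names echo the Java classes and are otherwise invented.]

    -- Two separately declared tables with the same column structure:
    CREATE TABLE foo (x INTEGER, y INTEGER);
    CREATE TABLE bar (x INTEGER, y INTEGER);

    -- Legal, because only the structure matters:
    SELECT x, y FROM foo
    UNION
    SELECT x, y FROM bar;

    -- Joining on the shared columns is equally unremarkable:
    SELECT f.x, f.y, b.y AS other_y
    FROM foo f JOIN bar b ON b.x = f.x;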
"JOG" wrote: > > This reminds me of a serious 'click' moment I had with data > structures. A long time ago, in a galaxy far far away, I once sat > blindly reinventing my own network model, with identifiers, pointers, > types, and all the fun of the fair. As an OO programmer it was the > only mindset I had. Then I considered encoding data similar to the > classic example of: > > * Aristotle is a man > * All men are mortal > * |= Aristotle is mortal > > Well, /clearly/ what I was dealing with here was a generic entity > class, of which Man was a subclass, and Aristotle an instance. > Something like: > > class Entity > { > boolean mortal; > string name; > Entity(_name, _mortal) : name(_name), mortal(_mortal); > }; > > class Man : public Entity > { > date bday; > Man(_name, _mortal) : Entity(_name, _mortal), bday(_bday); > }; > > > But, as I extended the example, the structures got more convoluted, > and the result more and more of a mess. Finally something clicked. It > wasn't about types, objects or inheritance, it was about /inference/, > and what I actually had was: > > Name(x, Aristotle) -> Species(x, Man) > Species(x, Man) -> Mortality(x, Mortal) > |= Name(x, Aristotle) -> Mortalilty(x, Mortal) > > No types or reification in sight. Instead I had two groups of > statements: > People = {Name, Species, Bday} > Entities = {Species, Mortality} > > A join of the two statements gave me the inference I required: {Name, > Mortality}. All of a sudden it seemed simple. So some questions: > > 1) So why not treat all 'inheritance' in this way? > 2) Could one extend to include 'behaviour' as well? > 3) And is this a crazy thing to suggest in a cross post to an OO > group? I'm jumping in realizing that I may be revealing a bit of ignorance here, so bear with me. If I need to send a message to all persons who are mortal, I could join People and Entities together by species and where Mortality is true? Once I've performed the join, I then dispatch a message to the resulting group: "GetAnnualCheckUp()" or something? In the application I'm currently writing, I have a large list of parameters. These parameters share some values such as Name and Label. However, there are different types of parameters such as those that represent a boolean state ("on/off"), a selection of choices, e.g. "Sine", "Sawtooth", "Square", etc., and other types. My current approach is traditional OO, and that's to have a hierarchy of Parameter classes. I can then keep a list of heterogeneous parameters and treat them polymorphically. For example: params[AmplitudeId] = new FloatParameter("Amplitude", "dB", 0.0f, 1.0f); params[WaveId] = new SelectionParameter("Wave", "Type", WaveNames); // Set a parameter value. All raw parameter values are in the range // of [0, 1]. Each parameter class knows how to transform raw parameter // values in an appropriate way. params[parameterId].SetValue(value); And all works well. I'm trying to wrap my head around how I would approach this problem using what you've described above. I could have a table representing attributes common to all parameters. Then tables representing attributes specific to one parameter type or another. I could relate the tables via foreign keys? When I need to dispatch parameter changes, I could join the appropriate tables and change the appropriate values? I'm open to new ways of approaching things, but the mechanism to implement all of this needs to be fast, in my case. Parameter changes can come in at hundreds of times a second.
On Mar 4, 12:47 am, "Leslie Sanford" <jabberdab...@bitemehotmail.com> wrote: > "JOG" wrote: > > > This reminds me of a serious 'click' moment I had with data > > structures. A long time ago, in a galaxy far far away, I once sat > > blindly reinventing my own network model, with identifiers, pointers, > > types, and all the fun of the fair. As an OO programmer it was the > > only mindset I had. Then I considered encoding data similar to the > > classic example of: > > > * Aristotle is a man > > * All men are mortal > > * |= Aristotle is mortal > > > Well, /clearly/ what I was dealing with here was a generic entity > > class, of which Man was a subclass, and Aristotle an instance. > > Something like: > > > class Entity > > { > > boolean mortal; > > string name; > > Entity(_name, _mortal) : name(_name), mortal(_mortal); > > }; > > > class Man : public Entity > > { > > date bday; > > Man(_name, _mortal) : Entity(_name, _mortal), bday(_bday); > > }; > > > But, as I extended the example, the structures got more convoluted, > > and the result more and more of a mess. Finally something clicked. It > > wasn't about types, objects or inheritance, it was about /inference/, > > and what I actually had was: > > > Name(x, Aristotle) -> Species(x, Man) > > Species(x, Man) -> Mortality(x, Mortal) > > |= Name(x, Aristotle) -> Mortalilty(x, Mortal) > > > No types or reification in sight. Instead I had two groups of > > statements: > > People = {Name, Species, Bday} > > Entities = {Species, Mortality} > > > A join of the two statements gave me the inference I required: {Name, > > Mortality}. All of a sudden it seemed simple. So some questions: > > > 1) So why not treat all 'inheritance' in this way? > > 2) Could one extend to include 'behaviour' as well? > > 3) And is this a crazy thing to suggest in a cross post to an OO > > group? > > I'm jumping in realizing that I may be revealing a bit of ignorance here, so > bear with me. > > If I need to send a message to all persons who are mortal, I could join > People and Entities together by species and where Mortality is true? > > Once I've performed the join, I then dispatch a message to the resulting > group: "GetAnnualCheckUp()" or something? Aye. Behaviour sits on top. > > In the application I'm currently writing, I have a large list of parameters. > These parameters share some values such as Name and Label. However, there > are different types of parameters such as those that represent a boolean > state ("on/off"), a selection of choices, e.g. "Sine", "Sawtooth", "Square", > etc., and other types. My current approach is traditional OO, and that's to > have a hierarchy of Parameter classes. I can then keep a list of > heterogeneous parameters and treat them polymorphically. For example: > > params[AmplitudeId] = new FloatParameter("Amplitude", "dB", 0.0f, 1.0f); > params[WaveId] = new SelectionParameter("Wave", "Type", WaveNames); > > // Set a parameter value. All raw parameter values are in the range > // of [0, 1]. Each parameter class knows how to transform raw parameter > // values in an appropriate way. > params[parameterId].SetValue(value); > > And all works well. If it works well don't touch it. As if I need to tell you that. > > I'm trying to wrap my head around how I would approach this problem using > what you've described above. I could have a table representing attributes > common to all parameters. Then tables representing attributes specific to > one parameter type or another. I could relate the tables via foreign keys? 
> When I need to dispatch parameter changes, I could join the appropriate > tables and change the appropriate values? In your example all parameters had a name and a value, right? Well, then you'd have a set of {paramName, value} tuples, where you could update/insert values as necessary. (I'm assuming that we don't need paramID and the paramName attribute uniquely identifies each parameter). In terms of other attributes, recommended design would suggest a relation for 'floatAttributes', and one for 'selectionAttributes'. They have different structures, and hence are collated under different predicates (with just the paramName again in common and functioning as a key). You'd join those to the values-relation as appropriate or necessary via the paramName key. I imagine that sounds wacky as hell if you are not used to a relational approach, but there's no reason the functionality couldn't be built into a programming language (as opposed to sitting in an external db). As Codd showed in the 'great debate' it'd certainly simplify querying the data, compared to std::iterating like it's 1999. > > I'm open to new ways of approaching things, but the mechanism to implement > all of this needs to be fast, in my case. Parameter changes can come in at > hundreds of times a second. Well, I'm talking theoretically atm - I'd certainly not abandon everything for mysql given your requirements...
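[Aside: a minimal SQL sketch of the schema described above, assuming paramName identifies each parameter; the table and column names are invented for illustration, and the sample values come from Leslie's post.]

    CREATE TABLE param_values (
        paramName VARCHAR(40) PRIMARY KEY,
        raw_value REAL NOT NULL              -- raw value in [0, 1]
    );
    CREATE TABLE float_attributes (
        paramName VARCHAR(40) PRIMARY KEY REFERENCES param_values,
        units     VARCHAR(10),
        min_val   REAL,
        max_val   REAL
    );
    CREATE TABLE selection_attributes (
        paramName VARCHAR(40) REFERENCES param_values,
        choice    VARCHAR(20),               -- 'Sine', 'Sawtooth', 'Square', ...
        PRIMARY KEY (paramName, choice)
    );

    -- A parameter change is an update on the values relation:
    UPDATE param_values SET raw_value = 0.5 WHERE paramName = 'Amplitude';

    -- Scaling a raw value into its float range is a join:
    SELECT v.paramName, f.min_val + v.raw_value * (f.max_val - f.min_val) AS scaled
    FROM param_values v JOIN float_attributes f ON f.paramName = v.paramName;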
On Mar 3, 11:52 am, JOG <j...@cs.nott.ac.uk> wrote: > I wondered if we might be able to come up with some agreement on what > object-relational impedence mismatch actually means. Yea, that will be the day. :-) As I see it, the relational model deals with data, while the object model deals with behavior. With the RM, data lies dormant unless it is explicitly told to change state, and what state to change to. With OM, an object is informed of the state of its environment and it changes its state as *it* sees fit. With the above two descriptions, the "impedance mismatch" can easily be seen.
Leslie Sanford wrote: > "JOG" wrote: > I'm open to new ways of approaching things, but the mechanism to implement > all of this needs to be fast, in my case. Parameter changes can come in at > hundreds of times a second. If the run-time performance needs to be predictable, then an RDBMS is probably not the way to go. Garbage collection and index shifting/restructuring may result in occasional pauses or slowdowns. Higher abstraction often results in less predictable run-time performance. (Less predictable and "slow" are not necessarily the same thing.) -T-
On Mon, 3 Mar 2008 15:54:16 -0600, Robert Martin wrote: > On 2008-03-03 11:36:50 -0600, "David Cressey" <cressey73@verizon.net> said: > >> There is, however, a different level of abstraction where an RDB is >> two-dimensional. So Tom is not "wrong" all the way. And it may be at that >> level of abstraction where the OO RM impedance match comes about. > > I don't know. Computer memory is one-dimensional. (I remotely remember some early works on associative memory. A spatial memory would be very interesting to have.) I think the more important point is probably that an execution path is one-dimensional and that many algorithms require the existence of an order (that includes the algorithms used in RDBMS). The balance could change, for example under super parallel architectures. In a molecular computer a CPU might become cheaper than a memory cell... -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Mon, 03 Mar 2008 19:02:40 -0400, Bob Badour wrote: > One can represent any graph on two-dimensional media just as one can > represent any relation on two-dimensional media. In fact, since a graph > is merely a set of vertices and a set of directed edges and since one > can easily represent vertices and directed edges as tuples, one can > easily represent any graph using relations. In fact a graph *is* a binary relation over the set of nodes. The notation used in graph theory is relational. For instance, edges are denoted as aGb. G is the graph. So what? That proves/refutes nothing. Observe that any program is an integer number. This certainly should make integer arithmetic hugely useful for software design. Let Windows XP be n, then n+m would be Vista, n/2 would be Windows ME... Argumentation to Turing/other-completeness is a fallacy in software design disciplines. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: > On 2008-03-03, Robert Martin <unclebob@objectmentor.com> wrote: >> >> There is indeed more to it than that. OO and RDB are both strategies >> for partitioning data. No. OO stands for modeling, that might include data being modeled or serving as models. It also might mean absence of data. In short, data are irrelevant. >> However, the motivation behind the partitioning >> is completely different. OO partitions data based on the way a >> particular application will process that data. RDBs partition data >> based on how many different applications will need to access that data. > > No, RDBs partition data so that it is sensibly and easily available to > any possible application. So if you use OO you are saying "there will > never be any other application that will need my data". No, it is engineering which says so. It translates as "put the requirements first," or simpler "pigs do not fly." -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On 4 mrt, 00:02, Bob Badour <bbad...@pei.sympatico.ca> wrote: > Roy Hann wrote: > > "Thomas Gagne" <tga...@wide-open-west.com> wrote in message > >news:7vqdnf21dLOnrVHanZ2dnUVZ_tuonZ2d@wideopenwest.com... > > >>JOG wrote: > > >>>I wondered if we might be able to come up with some agreement on what > >>>object-relational impedence mismatch actually means. I always thought > >>>the mismatch was centred on the issue that a single object != single > >>>tuple, but it appears there may be more to it than that. > > >>The issue as I've discovered it has to do with the fact OO systems are > >>composed of graphs of data and RDBs are two-dimensional. > > That is a remarkably uninformed and ill-conceived sentence. It's rather > like comparing boats and cars saying boats have hulls and cars are > pretty. Regardless of truth, the comparison is pointless on its face. > > One can represent any graph on two-dimensional media just as one can > represent any relation on two-dimensional media. In fact, since a graph > is merely a set of vertices and a set of directed edges and since one > can easily represent vertices and directed edges as tuples, one can > easily represent any graph using relations. > > The Great Debate was had about 3 decades ago and graphs lost. I beg to differ. What lost was the idea that a close coupling is required between how the data is stored and how it is accessed / queried, i.e., that in that sense you cannot have data independence. But such data independence can be just as well achieved with graph- based data models. -- Jan Hidders
Jan Hidders wrote: > On 4 mrt, 00:02, Bob Badour <bbad...@pei.sympatico.ca> wrote: > >>Roy Hann wrote: >> >>>"Thomas Gagne" <tga...@wide-open-west.com> wrote in message >>>news:7vqdnf21dLOnrVHanZ2dnUVZ_tuonZ2d@wideopenwest.com... >> >>>>JOG wrote: >> >>>>>I wondered if we might be able to come up with some agreement on what >>>>>object-relational impedence mismatch actually means. I always thought >>>>>the mismatch was centred on the issue that a single object != single >>>>>tuple, but it appears there may be more to it than that. >> >>>>The issue as I've discovered it has to do with the fact OO systems are >>>>composed of graphs of data and RDBs are two-dimensional. >> >>That is a remarkably uninformed and ill-conceived sentence. It's rather >>like comparing boats and cars saying boats have hulls and cars are >>pretty. Regardless of truth, the comparison is pointless on its face. >> >>One can represent any graph on two-dimensional media just as one can >>represent any relation on two-dimensional media. In fact, since a graph >>is merely a set of vertices and a set of directed edges and since one >>can easily represent vertices and directed edges as tuples, one can >>easily represent any graph using relations. >> >>The Great Debate was had about 3 decades ago and graphs lost. > > I beg to differ. What lost was the idea that a close coupling is > required between how the data is stored and how it is accessed / > queried, i.e., that in that sense you cannot have data independence. > But such data independence can be just as well achieved with graph- > based data models. > > -- Jan Hidders Well, I have yet to see it. Until I see any real evidence of a graph-based data model with physical data independence, we will just have to disagree.
On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: > >> On 2008-03-03, Robert Martin <unclebob@objectmentor.com> wrote: >>> >>> There is indeed more to it than that. OO and RDB are both strategies >>> for partitioning data. > > No. OO stands for modeling, that might include data being modeled or > serving as models. It also might mean absence of data. In short, data are > irrelevant. > >>> However, the motivation behind the partitioning >>> is completely different. OO partitions data based on the way a >>> particular application will process that data. RDBs partition data >>> based on how many different applications will need to access that data. >> >> No, RDBs partition data so that it is sensibly and easily available to >> any possible application. So if you use OO you are saying "there will >> never be any other application that will need my data". > > No, it is engineering which says so. It translates as "put the requirements > first," or simpler "pigs do not fly." > So no-one ever says "we should be able to get that stuff out of the xyz application and combine it with our data so that we can..."! E
On Tue, 4 Mar 2008 15:41:40 +0000, Eric wrote: > On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >> On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: >> >>> No, RDBs partition data so that it is sensibly and easily available to >>> any possible application. So if you use OO you are saying "there will >>> never be any other application that will need my data". >> >> No, it is engineering which says so. It translates as "put the requirements >> first," or simpler "pigs do not fly." > > So no-one ever says "we should be able to get that stuff out of the xyz > application and combine it with our data so that we can..."! You should plan this use case in advance. That would be a requirement. A system can only do things it was designed for. (This applies to RDBMS as well). For each application exist things it cannot do. That implies: either A) there will never be any other application that will ask to do these, or B) the application is incorrect (= does not fulfill the requirements). -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Tue, 4 Mar 2008 15:41:40 +0000, Eric wrote: > >> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>> On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: >>> >>>> No, RDBs partition data so that it is sensibly and easily available to >>>> any possible application. So if you use OO you are saying "there will >>>> never be any other application that will need my data". >>> >>> No, it is engineering which says so. It translates as "put the requirements >>> first," or simpler "pigs do not fly." >> >> So no-one ever says "we should be able to get that stuff out of the xyz >> application and combine it with our data so that we can..."! > > You should plan this use case in advance. That would be a requirement. A > system can only do things it was designed for. (This applies to RDBMS as > well). For each application exist things it cannot do. That implies: either > A) there will never be any other application that will ask to do these, or > B) the application is incorrect (= does not fulfill the requirements). > So you will always know, in advance, what all the possible future applications will want! I hope you are not crazy enough to believe that. So you are left with minimising "things it cannot do". I guess that means you should have something which can make the data available to any application that asks, according to any logically possible criterion. Did you know that this is what an RDBMS does? Perhaps not, since you have also said that "data are irrelevant". E
Responding to JOG... >> All attempts by applications to access a DB's tables and columns >> directly violates design principles that guard against close-coupling. >> This is a basic design tenet for OO. Violating it when jumping from OO >> to RDB is, I think, the source of problem that are collectively and >> popularly referred to as the object-relational impedance mismatch. > > I wondered if we might be able to come up with some agreement on what > object-relational impedence mismatch actually means. I always thought > the mismatch was centred on the issue that a single object != single > tuple, but it appears there may be more to it than that. First, I think it is important to clarify that the 'relational' in the mismatch isn't referring to the fact that the OO paradigm uses something other than set theory's relational model. The nature of the impedance mismatch lies in the way the OO and RDB paradigms implement the same relational model. I think the lack of 1:1 tuple mapping is just a symptom of the mismatch. There are several contributors to the mismatch... Applications (not just OO) are designed to solve specific problems so they are highly tailored to the particular problem in hand. In contrast, databases are designed to provide ad hoc, generic access to data that is independent of particular problem contexts. If one had to choose a single characterization of the mismatch, this would be it; everything else stems from it. Object properties include behavior. Behaviors interact in much more complex ways than data. Managing behaviors is the primary cause of failing to map 1:1 between OO Class Diagrams and Data Models of the same subject matter. That's because managing behavior places additional constraints on the way the software is constructed. OO relationships are instantiated at the object (tuple) level rather than the class (table) level. This allows much better tailoring of optimization to the problem in hand. It also focuses on capturing business rules and policies in the way relationships are instantiated. That, in turn, emphasizes preselecting sets of entities before they are actually accessed. Thus query-like searches for object collaborations are relatively rare in well-formed OO applications. Corollary: the OO paradigm navigates relationship paths consisting of individual binary associations and sequentially processes object sets resulting from such navigation. Thus there is no direct equivalent of an RDB join in OOPL or AAL syntax. (One can argue that the query/join approach is less tedious, but the OO paradigm has additional goals to satisfy, such as limiting access to knowledge.) Object identity is usually not explicitly embedded as an attribute of the object; OO applications are designed around address-based identity in computer memory. This profoundly changes the way one manages referential integrity. Thus OO developers will avoid class-level identity searches whenever possible. The relations in OO generalizations cannot be instantiated separately; a single tuple resolves the entire generalization. This is the one situation where a Class Model and a Data Model can never map 1:1. The reason lies in the OO paradigm's support of polymorphism. > I was hoping perhaps people might be able to offer perspectives on the > issues that they have encountered. One thing I would like to avoid > (outside of almost flames of course), is the notion that database > technology is merely a persistence layer (do people still actually > think that?) 
- I wonder if the 'mismatch' stems from such a > perspective. The short answer is that any OO application developer sees the DBMS as an implementation of a persistence layer. I think it is important to distinguish between pure persistence in the form of an RDB and a bundle of specialized server-side applications that are layered on top of an RDB and form a DBMS. Some CRUD/USER processing can be quite complex, such as data mining, but from the end customer's perspective all the server-side applications are providing is data access and formatting. Similarly, it is important to distinguish between CRUD/USER processing and other problems. In CRUD/USER processing the only problems being solved for the customer are data entry, data selection, and conversion to a convenient display representation. The RAD IDEs and layered model infrastructures already handle that sort of processing quite well (e.g., it is no accident that they employ form-based UIs that conveniently map into RDB tables) and applying OO development there would be largely redundant. Thus OO developers always believe that a database is a persistence mechanism because they deal with problems outside CRUD/USER processing. IOW, the OO application's solution *starts* with accessing data from a persistent store and *ends* with shipping results off for display rendering. That problem solution doesn't care what kind of data access services the DBMS may provide; it just wants to access and store particular piles of data. Similarly, it doesn't care whether user communications are via GUI, web browser, or heliograph. To put it more bluntly, from the OO application's solution perspective, the developer couldn't care less that the data was mined from multiple sources using exotic algorithms or whether it is stored in an RDB, an OODB, flat files, or on clay tablets. At the level of abstraction of the OO problem solution, only two services are required: "Save this pile of data I call 'X'" and "Give me the pile of data I call 'X'". Thus the entire interface for accessing persistence from an OO application's problem solution is typically just three messages of the form {message ID, [data packet]} that might look something like: {SAVE_DATA, data ID, dataset} // to persistence {GET_DATA, data ID} // to persistence {HERE_IS_DATA, data ID, dataset} // response from persistence The application solution will provide its own unique encode/decode of the message data packets into its objects and their attributes that is completely independent of the persistence schemas, etc.. Bottom line: the DBMS may provide all sorts of elegant CRUD/USER access services but the OO application doesn't care about that; that belongs to a different trade union. <aside> As a practical matter, the client-side does care because somehow those messages need to be mapped into the server-side DBMS services (e.g., creating SQL queries, performance caching, and optimizing joins for the DBMS). But to do that one only needs to provide the mapping once in a subsystem that is reusable by any application that accesses that DBMS. Typically that subsystem would be designed and implemented by someone who has specialized DBA skills to utilize the DBMS services in an appropriately clever fashion. IOW, the subsystem represents a fundamental separation of concerns from the specific problem solution by isolating and encapsulating specific mechanisms and optimizations related to persistence access. 
Note that when developing large OO applications, one does this sort of subsystem encapsulation for *all* subsystems within the application; UI and DB subsystems just happen to be ubiquitous concerns. One does OO development because one wants maintainable applications. Hence separation of concerns and encapsulation at the subsystem level is critically important for decoupling implementations in different parts of the application. </aside> -- There is nothing wrong with me that could not be cured by a capful of Drano. H. S. Lahman hsl@pathfindermda.com Pathfinder Solutions http://www.pathfindermda.com blog: http://pathfinderpeople.blogs.com/hslahman "Model-Based Translation: The Next Step in Agile Development". Email info@pathfindermda.com for your copy. Pathfinder is hiring: http://www.pathfindermda.com/about_us/careers_pos3.php. (888)OOA-PATH
On Tue, 4 Mar 2008 17:58:02 +0000, Eric wrote: > On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >> On Tue, 4 Mar 2008 15:41:40 +0000, Eric wrote: >> >>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>>> On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: >>>> >>>>> No, RDBs partition data so that it is sensibly and easily available to >>>>> any possible application. So if you use OO you are saying "there will >>>>> never be any other application that will need my data". >>>> >>>> No, it is engineering which says so. It translates as "put the requirements >>>> first," or simpler "pigs do not fly." >>> >>> So no-one ever says "we should be able to get that stuff out of the xyz >>> application and combine it with our data so that we can..."! >> >> You should plan this use case in advance. That would be a requirement. A >> system can only do things it was designed for. (This applies to RDBMS as >> well). For each application exist things it cannot do. That implies: either >> A) there will never be any other application that will ask to do these, or >> B) the application is incorrect (= does not fulfill the requirements). >> > So you will always know, in advance, what all the possible future > applications will want! I hope you are not crazy enough to believe that. No, I am. When I am looking for a solution I have to know what is the problem. Is that crazy? Further, dealing with a generalized problem I shall consider what would be the consequences of such generalization. There is always a price to pay. You certainly have heard about computability, NP problems and such stuff. But just going from 1ms to 100µs makes a huge difference. > So you are left with minimising "things it cannot do". I guess that > means you should have something which can make the data available to any > application that asks, according to any logically possible criterion. > Did you know that this is what an RDBMS does? No, it does not, when "asking" is defined as diffusely as in natural language. There exist certain limitations on what and how can be asked. These limitations should be specified as functional and non-functional requirements. If you prefer to buy a cat in the bag named RDBMS (or whatever), that's up to you. I merely state that there is always something in any bag. As for the bag RDBMS, among the things it contains are object-relational impedance, SQL, poor performance, unpredictable behavior, maintenance costs, etc. > Perhaps not, since you have also said that "data are irrelevant". Yes, I did. I am working mainly in the area of industrial data acquisition and control. It might sound funny, but being so close to "data" one starts to better understand why data are irrelevant. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:4bza4t1lmhj9.1734ger7g3zac$.dlg@40tude.net... > On Tue, 4 Mar 2008 15:41:40 +0000, Eric wrote: > > > On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > >> On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: > >> > >>> No, RDBs partition data so that it is sensibly and easily available to > >>> any possible application. So if you use OO you are saying "there will > >>> never be any other application that will need my data". > >> > >> No, it is engineering which says so. It translates as "put the requirements > >> first," or simpler "pigs do not fly." > > > > So no-one ever says "we should be able to get that stuff out of the xyz > > application and combine it with our data so that we can..."! > > You should plan this use case in advance. That would be a requirement. A > system can only do things it was designed for. (This applies to RDBMS as > well). For each application exist things it cannot do. That implies: either > A) there will never be any other application that will ask to do these, or > B) the application is incorrect (= does not fulfill the requirements). > This is true for an RDBMS. But more to the point, can a relational (or SQL) database be designed in such a way that it has moderately good support for thousands of anticipated queries, of which only a few dozen will actually come to be used ? And will those few dozens of uses provide the appropriate payback on the investment in building the database? Based on my experience with databases, I offer the firm opinion that the answer is yes.
On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Tue, 4 Mar 2008 17:58:02 +0000, Eric wrote: > >> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>> On Tue, 4 Mar 2008 15:41:40 +0000, Eric wrote: >>> >>>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>>>> On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: >>>>> >>>>>> No, RDBs partition data so that it is sensibly and easily available to >>>>>> any possible application. So if you use OO you are saying "there will >>>>>> never be any other application that will need my data". >>>>> >>>>> No, it is engineering which says so. It translates as "put the requirements >>>>> first," or simpler "pigs do not fly." >>>> >>>> So no-one ever says "we should be able to get that stuff out of the xyz >>>> application and combine it with our data so that we can..."! >>> >>> You should plan this use case in advance. That would be a requirement. A >>> system can only do things it was designed for. (This applies to RDBMS as >>> well). For each application exist things it cannot do. That implies: either >>> A) there will never be any other application that will ask to do these, or >>> B) the application is incorrect (= does not fulfill the requirements). >>> >> So you will always know, in advance, what all the possible future >> applications will want! I hope you are not crazy enough to believe that. > > No, I am. When I am looking for a solution I have to know what is the > problem. Is that crazy? Further, dealing with a generalized problem I shall > consider what would be the consequences of such generalization. There is > always a price to pay. You certainly have heard about computability, NP > problems and such stuff. But just going from 1ms to 100�s makes a huge > difference. Everything has a price. You have to choose. What I see is someone taking only the short-term view. > >> So you are left with minimising "things it cannot do". I guess that >> means you should have something which can make the data available to any >> application that asks, according to any logically possible criterion. >> Did you know that this is what an RDBMS does? > > No it does not, when "asking" is defined as diffuse as in the natural > language. There exist certain limitations on what and how can be asked. That's what I said - logically possible criteria. > These limitations should be specified as functional and non-functional > requirements. If possible. What I meant was that you should minimise the limitations on both the expected and the unknown futures. > If you prefer to buy a cat in the bag named RDBMS (or > whatever), that's up to you. I merely state that there is always something > in any bag. Cat? What cat? But actually, see what I said above about price. > As for the bag RDBMS, among the thing it contains are > object-relational impedance, You made this one up because you don't understand. > SQL, OK, it's not the perfect language, but what is? And it is possible to have an RDBMS that doesn't use it. > poor performance, Relative to what? Where are the tests? Do you install an RDBMS product and just go with whatever myths you have heard lately, or do you get a product specialist to sort it out? > unpredictable behavior, Please explain. Unless you're talking about bugs, but everything has those. > maintenance costs, Everything has those too. Again, I have to assume that you take only the short-term view. > etc. What else would you like to make up? 
> >> Perhaps not, since you have also said that "data are irrelevant". > > Yes, I did. I am working mainly in the area of industrial data acquisition > and control. It might sound funny, but being so close to "data" one starts > to better understand why data are irrelevant. > Aha! Your data is transient, and what you are mostly doing is transforming it. I at least have no problem with using OO programming for that. Also, that explains your short-term view. But what do you do with the data (presumably transformed) that does get kept for longer? Put it somewhere that will be available for a variety of expected and unexpected uses? But we were here before! E
On Tue, 04 Mar 2008 19:45:30 GMT, David Cressey wrote: > "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message > news:4bza4t1lmhj9.1734ger7g3zac$.dlg@40tude.net... >> On Tue, 4 Mar 2008 15:41:40 +0000, Eric wrote: >> >>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>>> On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: >>>> >>>>> No, RDBs partition data so that it is sensibly and easily available to >>>>> any possible application. So if you use OO you are saying "there will >>>>> never be any other application that will need my data". >>>> >>>> No, it is engineering which says so. It translates as "put the requirements >>>> first," or simpler "pigs do not fly." >>> >>> So no-one ever says "we should be able to get that stuff out of the xyz >>> application and combine it with our data so that we can..."! >> >> You should plan this use case in advance. That would be a requirement. A >> system can only do things it was designed for. (This applies to RDBMS as >> well). For each application exist things it cannot do. That implies: either >> A) there will never be any other application that will ask to do these, or >> B) the application is incorrect (= does not fulfill the requirements). >> > > This is true for an RDBMS. But more to the point, can a relational (or > SQL) database be designed in such a way that it has moderately good support > for thousands of anticipated queries, of which only a few dozen will > actually come to be used ? And will those few dozens of uses provide the > appropriate payback on the investment in building the database? > Based on my experience with databases, I offer the firm opinion that the > answer is yes. That's OK to me. But the question, as I understood it, was about qualitative design changes. My point was that there is always a presumption of the nature of changes which can and of those that cannot happen. A design is good when this presumption matches the reality. From this point of view there is no difference between deployment of OO or RDB solutions. As engineering both would solve some class of problems and anticipate quantitative changes of certain kind, but not qualitative ones. When I read what Eric wrote, that made me think that he believed that RDB would solve all problems and thus anticipate any changes. This would be obviously wrong. I guess that his hidden argument was "because RDB apparently solves all problems, then data-centric view is the right one." Yet another logical fallacy was a data-centric problem statement: "there will never be any other application that will need my data," used in order to prove data-centric view itself. In OO problems are not modeled in terms of applications using data. This alone does not yet imply anything about the problems being solved. Actually we all are solving similar problems, otherwise there would be no such quarrel between us. P.S. OO is especially focused on maintenance. H.S. Lahman already wrote about it in another post, so I need not to repeat it here. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
"H. S. Lahman" <hsl@pathfindermda.com> writes: > First, I think it is important to clarify that the 'relational' in > the mismatch isn't referring to the fact that the OO paradigm uses > something other than set theory's relational model. The nature of > the impedance mismatch lies in the way the OO and RDB paradigms > implement the same relational model. Huh? I know of the lambda calculus as a foundation of functional languages and the relation model as foundation of RDBs. But I wonder what the formal foundations of the OO family of languages is (references, please). As far as I understand those matters I would say the relational model is more abstract than the OO model (only the most common language use for the relational model, SQL, is... improvable). -- Stefan.
On Tue, 4 Mar 2008 20:26:01 +0000, Eric wrote: > On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >> On Tue, 4 Mar 2008 17:58:02 +0000, Eric wrote: >> >>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>>> On Tue, 4 Mar 2008 15:41:40 +0000, Eric wrote: >>>> >>>>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>>>>> On Mon, 3 Mar 2008 23:03:41 +0000, Eric wrote: >>>>>> >>>>>>> No, RDBs partition data so that it is sensibly and easily available to >>>>>>> any possible application. So if you use OO you are saying "there will >>>>>>> never be any other application that will need my data". >>>>>> >>>>>> No, it is engineering which says so. It translates as "put the requirements >>>>>> first," or simpler "pigs do not fly." >>>>> >>>>> So no-one ever says "we should be able to get that stuff out of the xyz >>>>> application and combine it with our data so that we can..."! >>>> >>>> You should plan this use case in advance. That would be a requirement. A >>>> system can only do things it was designed for. (This applies to RDBMS as >>>> well). For each application exist things it cannot do. That implies: either >>>> A) there will never be any other application that will ask to do these, or >>>> B) the application is incorrect (= does not fulfill the requirements). >>>> >>> So you will always know, in advance, what all the possible future >>> applications will want! I hope you are not crazy enough to believe that. >> >> No, I am. When I am looking for a solution I have to know what is the >> problem. Is that crazy? Further, dealing with a generalized problem I shall >> consider what would be the consequences of such generalization. There is >> always a price to pay. You certainly have heard about computability, NP >> problems and such stuff. But just going from 1ms to 100�s makes a huge >> difference. > > Everything has a price. You have to choose. What I see is someone taking > only the short-term view. Huh, now after all these cries that DB is not about persistency... What makes a temporal aspect so relevant then? >>> So you are left with minimising "things it cannot do". I guess that >>> means you should have something which can make the data available to any >>> application that asks, according to any logically possible criterion. >>> Did you know that this is what an RDBMS does? >> >> No it does not, when "asking" is defined as diffuse as in the natural >> language. There exist certain limitations on what and how can be asked. > > That's what I said - logically possible criteria. What about things which cannot be spelt in SQL? What about response times? Can you specify/guess an upper bound for all requests? For a certain subset of? >> These limitations should be specified as functional and non-functional >> requirements. > > If possible. What I meant was that you should minimise the limitations > on both the expected and the unknown futures. No optimum exists under these conditions. >> If you prefer to buy a cat in the bag named RDBMS (or >> whatever), that's up to you. I merely state that there is always something >> in any bag. > > Cat? What cat? But actually, see what I said above about price. You said that the price has to be paid. Right, but the question is about performance/price ratio. You can buy a bigger car, but it would require more gasoline and it would be more difficult to park. Software developing is an expensive thing. 
>> As for the bag RDBMS, among the thing it contains are >> object-relational impedance, > > You made this one up because you don't understand. What did I make up? That the impedance exists, or that it does not? >> SQL, > > OK, it's not the perfect language, but what is? And it is possible to > have an RDBMS that doesn't use it. > >> poor performance, > > Relative to what? Where are the tests? Do you install an RDBMS product > and just go with whatever myths you have heard lately, or do you get a > product specialist to sort it out? Come on, show me a nearest-neighbour search in ten-dimensional space implemented in an RDBMS. What would its complexity be? You should clearly understand that it is possible to break the neck of *any* indexing method. This refutes the appeal to "any logically possible criterion." >> unpredictable behavior, > > Please explain. Unless you're talking about bugs, but everything has > those. And what means are available to prevent bugs? How many RDBMSs support static analysis? SQL is practically untyped. Design by contract, how? Code reuse is close to none; well, code is evil, why should we reuse it? Upper bounds for memory footprint? For response times? >> maintenance costs, > > Everything has those too. Again, I have to assume that you take only the > short-term view. No, I mean long-term maintenance costs. >> etc. > > What else would you like to make up? Actually I don't want to concentrate on a critique of RDBMSs. An RDBMS is hardware to me; I would buy one if I needed it. My real target is rather the data-centric view, which is IMO the reason for the object-relational impedance. BTW, I see nothing wrong with RA, which is in my view fully independent of the notion of data. RA would fit nicely into OO as a set of types with corresponding operations. >>> Perhaps not, since you have also said that "data are irrelevant". >> >> Yes, I did. I am working mainly in the area of industrial data acquisition >> and control. It might sound funny, but being so close to "data" one starts >> to better understand why data are irrelevant. > > Aha! Your data is transient, and what you are mostly doing is > transforming it. Well, one viewed it this way in the 50s-60s, I guess. But it is a long time since the data-centric view of the system as a huge signal filter was dropped. That model does not scale and is inadequate (event-controlled and non-numeric things, GUIs, etc.). > I at least have no problem with using OO programming > for that. Also, that explains your short-term view. But what do you do > with the data (presumably transformed) that does get kept for longer? > Put it somewhere that will be available for a variety of expected and > unexpected uses? But we were here before! Yes, here we go again. Data are meaningless if their usage is unexpected. Nobody can use a CD-ROM in a wind-up phonograph, deaf people notably. The system does not keep anything; it exists and behaves. Deployment of a DB there is always problematic. There have been many efforts in this area in recent years, mainly to standardize the schemas. That is not enough, because the relational model does not fit. Channels are largely event-controlled with time stamps, so you cannot make any reasonable relations beyond (time, value) without data corruption. The queries would be like "give me the oil temperature profile when the velocity was out of range for longer than 10s before the event E." You need various interpolation methods, calculated channels, and channels simulated from previously recorded measurements.
Channels are created and destroyed, and their properties change. The end effect is that where DBs are used at all, they are used only marginally. I remember an amusing customer requirement: "we want to be able to run our tests even if the DB server is off-line." (:-)) -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Mar 3, 3:11 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-03 12:29:02 -0600, TroyK <cs_tr...@juno.com> said: > > > My experience is somewhere between 2 and 3 orders of magnitude > > difference between implementing a business rules change in the db vs. > > the programming team doing it in OO code. > > Then you should be able to fly rings around the programmers and get > them all fired. Why haven't you? If by "fly rings around the programmers" you mean having a fully functioning reference implementation up and running in SQL within 2 weeks that ends up taking a team of 3 programmers over 3 months to implement in code, then, yeah, I guess I do. But the architecture called for the programming to be done in a business layer implemented in C# -- we expected and planned for that, so, happily, no one gets fired. > Ladies and gentlemen, there are certainly tasks that are better suited > to SQL and stored procedures. There are other tasks that are better Who said anything about stored procedures? I'm talking about implementing the business rules via constraint declaration in the database, and deriving new values through the application of SQL queries. > suited to general purpose languages. True wisdom comes from knowing > the strengths and weaknesses of both. Good architects build systems > that combine the tools synergistically. And good agile programmers know to use a high-level language in order to enable iterating over design at a rapid pace. TroyK
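TroyK does not show his schema, so the following is only a minimal sketch of what "business rules via constraint declaration" plus "deriving new values through SQL queries" can look like, using Python's built-in sqlite3 module; the table, columns, and rules are invented for illustration.

    # Business rules as declarative CHECK constraints, plus a derived view.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE orders (
        order_id   INTEGER PRIMARY KEY,
        quantity   INTEGER NOT NULL CHECK (quantity > 0),      -- rule: no empty orders
        unit_price REAL    NOT NULL CHECK (unit_price >= 0),   -- rule: no negative prices
        status     TEXT    NOT NULL CHECK (status IN ('open', 'shipped', 'cancelled'))
    );
    -- A derived value: no procedural code, just a query over the base table.
    CREATE VIEW order_totals AS
        SELECT order_id, quantity * unit_price AS total
        FROM orders
        WHERE status <> 'cancelled';
    """)

    con.execute("INSERT INTO orders VALUES (1, 3, 9.99, 'open')")
    try:
        con.execute("INSERT INTO orders VALUES (2, -1, 9.99, 'open')")  # violates a rule
    except sqlite3.IntegrityError as exc:
        print("rejected by the database:", exc)
    print(con.execute("SELECT * FROM order_totals").fetchall())

Changing a rule of this kind is typically a one-line change to a constraint or view definition, which is one plausible reading of the orders-of-magnitude difference TroyK reports.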
Stefan Nobis <snobis@gmx.de> writes: >But I wonder what the formal foundations >of the OO family of languages is »The formal foundation of a family of languages« is not a well-specified term. What is a »formal foundation«? The syntax and semantics of any specific object-oriented language is given in the specification of that language. But I do not know when you will deem a specification to be »formal« or a »formal foundation«.
On Mar 4, 3:39 pm, r...@zedat.fu-berlin.de (Stefan Ram) wrote: > Stefan Nobis <sno...@gmx.de> writes: > >But I wonder what the formal foundations > >of the OO family of languages is > > »The formal foundation of a family of languages« is not a > well-specified term. What is a »formal foundation«? > The syntax and semantics of any specific object-oriented > language is given in the specification of that language. If it has one. And even if it has a spec, is it a formal spec, or a prose spec? The Java spec for example is not formal. Still, that's better than you get from most. With many languages the "spec" is the reference implementation. > But I do not know when you will deem a specification to be > »formal« or a »formal foundation«. Start here: http://en.wikipedia.org/wiki/Formal_language Marshall
On 2008-03-03 16:49:29 -0600, "David Cressey" <cressey73@verizon.net> said: > It doesn't seem as > straightforward to me to extend a language like C++ so that it becomes > suitable for declaring relational transformations on data. If it is > straight forward, then I'd like to hear from people who are doing it. Take a look at LINQ. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 16:49:29 -0600, "David Cressey" <cressey73@verizon.net> said: > But the idea of a single language that is suitable for everything remains an > elusive goal, and probably an unproductive endeavor. Agreed. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 16:28:46 -0600, "David Cressey" <cressey73@verizon.net> said: > > "Roy Hann" <specially@processed.almost.meat> wrote in message > news:nfWdnW05its1o1HanZ2dnUVZ8t2snZ2d@pipex.net... > > >> Until I know their reasons for their views on data structures I couldn't >> say. However I notice that I am surrounded by programmers who consume > most >> the development budget writing code, and when a change request comes along > I >> can accommodate it in the database in minutes and they spend months > spewing >> out more code (sometimes after doing an extensive and expensive impact >> assessment). Code may not be evil, but it sure has a case to answer. > > I have to agree with this comment, based on my own experience. OK, then I'll ask you the same question I asked Roy. If you *could have* done it in minutes, why didn't you? -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 17:25:48 -0600, topmind <topmind@technologist.com> said: > But I think anybody inspecting both examples will clearly see that my > version is a lot less total code. C++ is a pretty wordy language. If I wrote it in Ruby I bet I'd beat you by a wide margin. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 17:06:05 -0600, JOG <jog@cs.nott.ac.uk> said: > A join of the two statements gave me the inference I required: {Name, > Mortality}. All of a sudden it seemed simple. Interesting story. Yes, when you have a problem of inference, it's good to use an inference engine. > So some questions: > > 1) So why not treat all 'inheritance' in this way? Because all inheritance is not about inference. > 2) Could one extend to include 'behaviour' as well? Yes. See the Prolog language. > 3) And is this a crazy thing to suggest in a cross post to an OO > group? I've seen a lot crazier things. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
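JOG's Aristotle example, as quoted above, needs nothing more than two relations and a join. A minimal sketch, using plain Python sets of tuples as stand-in relations; the attribute names follow his post, and the literal values are made up for illustration.

    # People = {Name, Species, Bday}; Entities = {Species, Mortality}
    people = {("Aristotle", "Man", "384BC")}
    entities = {("Man", "Mortal")}

    # Natural join on Species, projected to {Name, Mortality}:
    inferred = {
        (name, mortality)
        for (name, species, _bday) in people
        for (species2, mortality) in entities
        if species == species2
    }
    print(inferred)  # {('Aristotle', 'Mortal')} -- the inference, with no class hierarchy in sight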
On 2008-03-03 18:24:33 -0600, Marshall <marshall.spight@gmail.com> said: > One's requirements dictate data with certain functional dependencies. > Among the easiest tasks in application design is one of the earliest: > structuring the data. The RM structures data as relations. Relational > structures lack query bias, which is one of the reasons why SQL > is so good at ad hoc queries (compared to the other choices.) > OO structures mandates an object structure (unsurprisingly): out the > gate, one has no choice but to build in a query bias, whether > one wants it or not. Agreed. That "bias" can work very well if you only use the biased queries. That works well in the context of a single application, or a single piece of an application. It does not work well in the general case, which is why OODBs never quite took off. > > Furthermore, since OOPLs lack physical independence, traversing > the graph may be quite expensive, particularly in the case where > the graph is backed by storage in a database, which is part of > why ORM is such a universally bad idea. No, you have this wrong. ORMs generally use standard SQL queries to traverse and gather data from the DB. Then that data is placed into OO structures so that the application can take advantage of the bias. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
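As a concrete sketch of the two steps Robert describes (no particular ORM is being quoted; the Account class and table are invented for illustration), here is a set-oriented SQL fetch followed by row-at-a-time construction of application-shaped objects:

    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class Account:          # the "biased" structure the application wants to traverse
        account_id: int
        owner: str
        balance: float

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE account (account_id INTEGER, owner TEXT, balance REAL)")
    con.execute("INSERT INTO account VALUES (1, 'Smith', 42.0)")

    # Step 1: ordinary SQL does the set-oriented work inside the DBMS.
    rows = con.execute("SELECT account_id, owner, balance FROM account WHERE balance > 0")
    # Step 2: each row becomes an object the rest of the code can navigate directly.
    accounts = [Account(*row) for row in rows]
    print(accounts)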
On 2008-03-03 16:31:43 -0600, "David Cressey" <cressey73@verizon.net> said: > > "Bob Badour" <bbadour@pei.sympatico.ca> wrote in message > news:47cc383f$0$4041$9a566e8b@news.aliant.net... > > >> It's pretty obvious to me: object-relational mismatch is to relations as >> assembler-object mismatch is to objects. > > I didn't get this comment. Now that someone else flagged it as a > keeper, I feel the need to ask for an explanation. Explanation: "WAA WAA WAA WAA WAA WAA". (The sound made by the adults in the Peanuts cartoons). -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-03 18:34:39 -0600, Marshall <marshall.spight@gmail.com> said: > In SQL, if I have two relations with x and y int columns, I can > union them, or join on them, or whatever. There is no way, > in fact, to forbid such a thing, just like in Java there is no way > to allow such a thing. You are confusing OO with static typing. In OO languages like Ruby, Python, or Smalltalk you can pass any object to any function irrespective of type. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On Wed, 5 Mar 2008 00:46:14 -0600, Robert Martin wrote: > On 2008-03-03 16:49:29 -0600, "David Cressey" <cressey73@verizon.net> said: > >> But the idea of a single language that is suitable for everything remains an >> elusive goal, and probably an unproductive endeavor. > > Agreed. Disagreed. The idea of a multilingual system is the most damaging thing in software development history. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Wed, 5 Mar 2008 01:10:01 -0600, Robert Martin wrote: > On 2008-03-03 18:34:39 -0600, Marshall <marshall.spight@gmail.com> said: > >> In SQL, if I have two relations with x and y int columns, I can >> union them, or join on them, or whatever. There is no way, >> in fact, to forbid such a thing, just like in Java there is no way >> to allow such a thing. > > You are confusing OO with static typing. In OO languages like Ruby, > Python, or Smalltalk you can pass any object to any function > irrespective of type. Which is a bad idea. Nevertheless you don't need dynamic typing in order to deal with that. You could have a class of relations in order to define operations (like join) on them. That will give a static type to the result of any join. The problem is elsewhere. How do I know in *advance* that the result of join is a relation of certain narrower kind? Both "statically untyped" SQL and "dynamically untyped" fancy languages have no answer to that until run time. Note that this is a software design issue. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
Stefan Nobis wrote: > "H. S. Lahman" <hsl@pathfindermda.com> writes: >>First, I think it is important to clarify that the 'relational' in >>the mismatch isn't referring to the fact that the OO paradigm uses >>something other than set theory's relational model. The nature of >>the impedance mismatch lies in the way the OO and RDB paradigms >>implement the same relational model. > Huh? I know of the lambda calculus as a foundation of functional > languages and the relation model as foundation of RDBs. But I wonder > what the formal foundations of the OO family of languages is > (references, please). 1. The prog lang (Simula) came first, not the formalisms. Which then inspired +/- aligned with other notions (ADTs etc) that were present/emerging in the CS community at that time. 2. The closest basic formalism that OO could be mapped to is ADTs. ADT theory uses a number of formalisms (algebraic specification, type theories etc) to define the behaviour of ADTs. But these formalisms use the same 'language' (propositional/predicate logic, set theory etc) as the Relational model. > As far as I understand those matters I would say the relational model > is more abstract than the OO model The ADT formalisms suggest otherwise. > (only the most common language use for the relational model, SQL, is... improvable). Indeed. Regards, Steven Perryman
Robert Martin wrote: > On 2008-03-03 18:34:39 -0600, Marshall <marshall.spight@gmail.com> said: >> In SQL, if I have two relations with x and y int columns, I can >> union them, or join on them, or whatever. There is no way, >> in fact, to forbid such a thing, just like in Java there is no way >> to allow such a thing. > You are confusing OO with static typing. In OO languages like Ruby, > Python, or Smalltalk you can pass any object to any function > irrespective of type. And you (both) are equating the (strong) typing model of Simula as the only strong typing model. Go and look at Functional prog langs etc for examples of how Marshalls' gripe would be done in a type-safe manner. Regards, Steven Perryman
On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Tue, 04 Mar 2008 19:45:30 GMT, David Cressey wrote: > <snip> >> >> This is true for an RDBMS. But more to the point, can a relational (or >> SQL) database be designed in such a way that it has moderately good support >> for thousands of anticipated queries, of which only a few dozen will >> actually come to be used ? And will those few dozens of uses provide the >> appropriate payback on the investment in building the database? > >> Based on my experience with databases, I offer the firm opinion that the >> answer is yes. > > That's OK to me. Good. > But the question, as I understood it, was about > qualitative design changes. My point was that there is always a presumption > of the nature of changes which can and of those that cannot happen. A > design is good when this presumption matches the reality. From this point > of view there is no difference between deployment of OO or RDB solutions. > As engineering both would solve some class of problems and anticipate > quantitative changes of certain kind, but not qualitative ones. > > When I read what Eric wrote, that made me think that he believed that RDB > would solve all problems and thus anticipate any changes. This would be > obviously wrong. I don't believe that merely using an RDBMS will solve all problems. What I meant was that, accepting what David said above, if you keep your data in an RDBMS, it will be easily available for the solution of any possible problem that can be solved using that data. > I guess that his hidden argument was "because RDB > apparently solves all problems, then data-centric view is the right one." No, the argument is that the data-centric view is often (not necessarily always) the right one to take, and that keeping the data in an RDBMS provides a better basis for solving future problems using the data than any other way of keeping it. > Yet another logical fallacy was a data-centric problem statement: "there > will never be any other application that will need my data," used in order > to prove data-centric view itself. There is no fallacy here. First there is a fact - you have data, however you choose to view it. Do you want to deny this? If you do we may need to sort out the meaning of the word "data". Given this fact, I am saying that you can not guarantee that there will be no other uses for your data, so you must keep open a way for those other uses to see your data. > In OO problems are not modeled in terms > of applications using data. No, but the data is still there. > This alone does not yet imply anything about > the problems being solved. Actually we all are solving similar problems, > otherwise there would be no such quarrel between us. > > P.S. OO is especially focused on maintenance. H.S. Lahman already wrote > about it in another post, so I need not to repeat it here. > E
On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Tue, 4 Mar 2008 20:26:01 +0000, Eric wrote: > <snip> >> >> That's what I said - logically possible criteria. > > What about things which cannot be spelt in SQL? Use the query language (not necessarily SQL) to pass the data to something that can deal with it. > What about response times? > Can you specify/guess an upper bound for all requests? For a certain subset > of? This is just a prejudice - get the right RDBMS and the right expert to tune it (specifically for what it must do, not generically) and you might be surprised. > >>> These limitations should be specified as functional and non-functional >>> requirements. >> >> If possible. What I meant was that you should minimise the limitations >> on both the expected and the unknown futures. > > No optimum exists under these conditions. > >>> If you prefer to buy a cat in the bag named RDBMS (or >>> whatever), that's up to you. I merely state that there is always something >>> in any bag. >> >> Cat? What cat? But actually, see what I said above about price. > > You said that the price has to be paid. Right, but the question is about > performance/price ratio. You can buy a bigger car, but it would require > more gasoline and it would be more difficult to park. Software developing > is an expensive thing. > >>> As for the bag RDBMS, among the thing it contains are >>> object-relational impedance, >> >> You made this one up because you don't understand. > > What didn't I? That impedance exists or that it does not? > >>> SQL, >> >> OK, it's not the perfect language, but what is? And it is possible to >> have an RDBMS that doesn't use it. >> >>> poor performance, >> >> Relative to what? Where are the tests? Do you install an RDBMS product >> and just go with whatever myths you have heard lately, or do you get a >> product specialist to sort it out? > > Come on, show me the nearest neighbour search in ten-dimensional space > implemented in RDBMS. What would be the complexity of? You should clearly > understand that it is possible to break the neck of *any* indexing method. > This refutes the argument to "any logically possible criterion.". > But I don't believe that an RDBMS can do everything. I would not be surprised to find the data defining the ten-dimensional space in an RDBMS, but I would never expect to do such a calculation in the query language. >>> unpredictable behavior, >> >> Please explain. Unless you're talking about bugs, but everything has >> those. > > And what are the means available in order to prevent bugs? How much RDBMS > support static analysis? SQL is practically untyped. Design by contract, > how? Code reuse is close to none, well, code is evil, why should we reuse > it? Upper bounds for memory footprint? For response times? > >>> maintenance costs, >> >> Everything has those too. Again, I have to assume that you take only the >> short-term view. > > No,I mean long-term maintenance costs. > >>> etc. >> >> What else would you like to make up? > > Actually I don't want to concentrate on critique of RDBMS. It is a hardware > to me. I would buy one in case I needed it. > > My objective is rather data-centric view. Which is IMO the reason for > object-relational impedance. BTW, I see nothing wrong in RA, which has in > my view fully independent on the notion of data. RA would nicely fit into > OO as a set of types with corresponding operations. > >>>> Perhaps not, since you have also said that "data are irrelevant". >>> >>> Yes, I did. 
I am working mainly in the area of industrial data acquisition >>> and control. It might sound funny, but being so close to "data" one starts >>> to better understand why data are irrelevant. >> >> Aha! Your data is transient, and what you are mostly doing is >> transforming it. > > Well, one viewed it this way in 50-60s, I guess. But it is a long time > since one dropped this data-centric view on the system as a huge signal > filter. This model does not scale and is inadequate (event controlled and > non-numeric things, GUI etc). > >> I at least have no problem with using OO programming >> for that. Also, that explains your short-term view. But what do you do >> with the data (presumably transformed) that does get kept for longer? >> Put it somewhere that will be available for a variety of expected and >> unexpected uses? But we were here before! > > Yes, here we go again. Data are meaningless if usage is unexpected. Nobody > can use a CD-ROM in a wind-up phonograph, deaf people notably. No collection of data is useful to everybody, so deaf people have got nothing to do with it. Other than that, you have just demonstrated a lack of understanding of the difference between the logical and the physical. It is possible for me to collect the necessary bits of technology to transfer a piece of music from a CD-ROM to a disc or cylinder for the wind-up phonograph. It is still the same piece of music. > > The system does not keep anything it exists and behaves. But it has inputs and outputs. There may also be a need for it to record some of its behaviour. There may be a reason to keep some of the outputs, or even the inputs (for later extended analysis?). If you have a system that genuinely keeps nothing, I have no argument with how you choose to create it, as long as it works. > Deployment of DB > there is always problematic. There are much ongoing efforts in this area in > recent years, mainly to standardize the schemas. That is not enough, > because relational model does not fit. Channels are largely event > controlled with time stamps. So you cannot make any reasonable relations > beyond (time, value) without data corruption. The queries would be like > "give me the oil temperature profile when the velocity was out of range for > longer than 10s before the event E." You need various interpolation > methods, calculated channels and ones simulated from previously recorded > measurements. Channels are created and destroyed, their properties change. > The end effect is that when DBs are used then only marginally. I remember > an amusing customer requirement: "we want to be able to run our tests even > if the DB server is off-line." (:-)) > E
On 2008-03-05, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Wed, 5 Mar 2008 00:46:14 -0600, Robert Martin wrote: > >> On 2008-03-03 16:49:29 -0600, "David Cressey" <cressey73@verizon.net> said: >> >>> But the idea of a single language that is suitable for everything remains an >>> elusive goal, and probably an unproductive endeavor. >> >> Agreed. > > Disagreed. > > The idea of multilingual system is the most damaging thing in software > developing history. > Well, that opinion explains a lot of the things you have said. You also appear to have assumed that other people agreed, leading you to misinterpret what they said. If you should not use multiple languages, there must be a universal language. What is it, is it really universal _right now_, and if not, when will it be and what should we do in the meantime? E
On Mar 5, 6:57 am, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-03 17:06:05 -0600, JOG <j...@cs.nott.ac.uk> said: > > > A join of the two statements gave me the inference I required: {Name, > > Mortality}. All of a sudden it seemed simple. > > Interesting story. Yes, when you have a problem of inference, it's > good to use an inference engine. > > > So some questions: > > > 1) So why not treat all 'inheritance' in this way? > > Because all inheritance is not about inference. Hmmm. Then might you give an example of a situation where inheritance cannot be described in terms of inference? > > > 2) Could one extend to include 'behaviour' as well? > > Yes. See the Prolog language. > > > 3) And is this a crazy thing to suggest in a cross post to an OO > > group? > > I've seen a lot crazier things. > > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com > The Agile Transition Experts | web: www.objectmentor.com > 800-338-6716 |
Dmitry A. Kazakov wrote: > On Wed, 5 Mar 2008 01:10:01 -0600, Robert Martin wrote: > > >> <snip> >> You are confusing OO with static typing. In OO languages like Ruby, >> Python, or Smalltalk you can pass any object to any function >> irrespective of type. >> > > Which is a bad idea. > Why? > Nevertheless you don't need dynamic typing in order to deal with that. You > could have a class of relations in order to define operations (like join) > on them. That will give a static type to the result of any join. > At what cost, given that a similar benefit can be gained without the extra effort? > The problem is elsewhere. How do I know in *advance* that the result of > join is a relation of certain narrower kind? Both "statically untyped" SQL > and "dynamically untyped" fancy languages have no answer to that until run > time. Note that this is a software design issue. > The result is what it is. If it answers the messages sent to it predictably, what does it matter? -- Visit <http://blogs.instreamco.com/anything.php> to read my rants on technology and the finance industry. Visit <http://tggagne.blogspot.com/> for politics, society and culture.
Leslie Sanford wrote: > "JOG" wrote: > >>This reminds me of a serious 'click' moment I had with data >>structures. A long time ago, in a galaxy far far away, I once sat >>blindly reinventing my own network model, with identifiers, pointers, >>types, and all the fun of the fair. As an OO programmer it was the >>only mindset I had. Then I considered encoding data similar to the >>classic example of: >> >>* Aristotle is a man >>* All men are mortal >>* |= Aristotle is mortal >> >>Well, /clearly/ what I was dealing with here was a generic entity >>class, of which Man was a subclass, and Aristotle an instance. >>Something like: >> >>class Entity >>{ >>boolean mortal; >>string name; >>Entity(_name, _mortal) : name(_name), mortal(_mortal); >>}; >> >>class Man : public Entity >>{ >>date bday; >>Man(_name, _mortal) : Entity(_name, _mortal), bday(_bday); >>}; >> >> >>But, as I extended the example, the structures got more convoluted, >>and the result more and more of a mess. Finally something clicked. It >>wasn't about types, objects or inheritance, it was about /inference/, >>and what I actually had was: >> >>Name(x, Aristotle) -> Species(x, Man) >>Species(x, Man) -> Mortality(x, Mortal) >>|= Name(x, Aristotle) -> Mortalilty(x, Mortal) >> >>No types or reification in sight. Instead I had two groups of >>statements: >>People = {Name, Species, Bday} >>Entities = {Species, Mortality} >> >>A join of the two statements gave me the inference I required: {Name, >>Mortality}. All of a sudden it seemed simple. So some questions: >> >>1) So why not treat all 'inheritance' in this way? >>2) Could one extend to include 'behaviour' as well? >>3) And is this a crazy thing to suggest in a cross post to an OO >>group? > > > I'm jumping in realizing that I may be revealing a bit of ignorance here, so > bear with me. > > If I need to send a message to all persons who are mortal, I could join > People and Entities together by species and where Mortality is true? > > Once I've performed the join, I then dispatch a message to the resulting > group: "GetAnnualCheckUp()" or something? > > In the application I'm currently writing, I have a large list of parameters. > These parameters share some values such as Name and Label. However, there > are different types of parameters such as those that represent a boolean > state ("on/off"), a selection of choices, e.g. "Sine", "Sawtooth", "Square", > etc., and other types. My current approach is traditional OO, and that's to > have a hierarchy of Parameter classes. I can then keep a list of > heterogeneous parameters and treat them polymorphically. For example: > > params[AmplitudeId] = new FloatParameter("Amplitude", "dB", 0.0f, 1.0f); > params[WaveId] = new SelectionParameter("Wave", "Type", WaveNames); > > // Set a parameter value. All raw parameter values are in the range > // of [0, 1]. Each parameter class knows how to transform raw parameter > // values in an appropriate way. > params[parameterId].SetValue(value); > > And all works well. > > I'm trying to wrap my head around how I would approach this problem using > what you've described above. I could have a table representing attributes > common to all parameters. Then tables representing attributes specific to > one parameter type or another. I could relate the tables via foreign keys? > When I need to dispatch parameter changes, I could join the appropriate > tables and change the appropriate values? 
> > I'm open to new ways of approaching things, but the mechanism to implement > all of this needs to be fast, in my case. Parameter changes can come in at > hundreds of times a second. Leslie, Relationally, I would approach what you are doing almost exactly as you are now. The data type for parameter values would be a union type with sub-types as appropriate. Whether one has a single base relation with all parameter types or one has a different base relation for each sub-type becomes almost moot. No matter which one chooses, one can have both using views. Either a restrict view to derive each sub-type relation from the single base relation or a single union view to derive the super-type relation from the individual sub-type base relations. If this is a subject that interests you, I highly recommend Fabian Pascal's _Practical Issues in Database Management ..._
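A minimal sketch of the view-based arrangement described above, with hypothetical tables loosely echoing Leslie's parameter example rather than his actual schema. This shows the separate-base-relation-per-sub-type variant with a union view deriving the super-type relation; the restrict-view alternative works the other way around.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    -- One base relation per parameter sub-type...
    CREATE TABLE float_parameter     (name TEXT, label TEXT, low REAL, high REAL);
    CREATE TABLE selection_parameter (name TEXT, label TEXT, choices TEXT);

    -- ...and a union view derives the super-type relation of common attributes.
    CREATE VIEW parameter AS
        SELECT name, label, 'float'     AS kind FROM float_parameter
        UNION ALL
        SELECT name, label, 'selection' AS kind FROM selection_parameter;
    """)
    con.execute("INSERT INTO float_parameter VALUES ('Amplitude', 'dB', 0.0, 1.0)")
    con.execute("INSERT INTO selection_parameter VALUES ('Wave', 'Type', 'Sine,Sawtooth,Square')")
    print(con.execute("SELECT * FROM parameter").fetchall())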
H. S. Lahman wrote: > Responding to JOG... > >>> All attempts by applications to access a DB's tables and columns >>> directly violates design principles that guard against close-coupling. >>> This is a basic design tenet for OO. Violating it when jumping from OO >>> to RDB is, I think, the source of problem that are collectively and >>> popularly referred to as the object-relational impedance mismatch. >> >> I wondered if we might be able to come up with some agreement on what >> object-relational impedence mismatch actually means. I always thought >> the mismatch was centred on the issue that a single object != single >> tuple, but it appears there may be more to it than that. > > First, I think it is important to clarify that the 'relational' in the > mismatch isn't referring to the fact that the OO paradigm uses something > other than set theory's relational model. The nature of the impedance > mismatch lies in the way the OO and RDB paradigms implement the same > relational model. Frankly, the paragraph above is nonsense. > I think the lack of 1:1 tuple mapping is just a symptom of the mismatch. > There are several contributors to the mismatch... > > Applications (not just OO) are designed to solve specific problems so > they are highly tailored to the particular problem in hand. In contrast, > databases are designed to provide ad hoc, generic access to data that is > independent of particular problem contexts. If one had to choose a > single characterization of the mismatch, this would be it; everything > else stems from it. > > Object properties include behavior. Behaviors interact in much more > complex ways than data. Managing behaviors is the primary cause of > failing to map 1:1 between OO Class Diagrams and Data Models of the same > subject matter. That's because managing behavior places additional > constraints on the way the software is constructed. The above three paragraphs are not much better. > OO relationships are instantiated at the object (tuple) level rather > than the class (table) level. Equating object instances with tuples and object classes with relations is a great blunder. One of two great blunders detailed in Date's and Darwen's _The Third Manifesto_. A tuple is actually a logical join of N object values. An object class extent is a unary relation. OO has no concept equal to a binary relation--let alone an n-ary relation. [remaining nonsense snipped]
On Mar 4, 11:05 pm, Robert Martin <uncle...@objectmentor.com> wrote: > > > > Furthermore, since OOPLs lack physical independence, traversing > > the graph may be quite expensive, particularly in the case where > > the graph is backed by storage in a database, which is part of > > why ORM is such a universally bad idea. > > No, you have this wrong. ORMs generally use standard SQL queries to > traverse and gather data from the DB. Then that data is placed into OO > structures so that the application can take advantage of the bias. Just the fact that they use SQL isn't sufficient. They have to use it as well as a person could, through an interface that is generally information-lossy enough (or at least, used in a lossy way) that that's impossible. The most gratuitous example I can think of was some early EJB containers I played with, back when I was still thinking that ORM was something that could possibly be done well. Against a table of a few hundred rows, one could execute "delete from table". The comparable command through the ORM issued SQL to load every row as an object, then in a loop called obj.delete() which issued a single DELETE statement for that row. It was ten thousand times slower, and that's for only a couple hundred rows. Of course this example is extreme, but it's still illustrative of a general principle. I have *often* seen four and five orders of magnitude performance difference between straight SQL and ORM SQL, across a wide variety of ORMs. The very idea of ORM demands it: you have to try to push a whole set-oriented language through a functional interface. Marshall
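The contrast Marshall describes can be made concrete with a small sketch; the EJB behaviour is paraphrased from his description, not reproduced from any real container, and the table is invented.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
    con.executemany("INSERT INTO t (payload) VALUES (?)", [("x",)] * 500)

    # Set-oriented: the whole set is handled by one statement.
    con.execute("DELETE FROM t")

    # Row-at-a-time, in the style of a naive object mapper: load every row as an
    # "object", then issue one DELETE per object.
    con.executemany("INSERT INTO t (payload) VALUES (?)", [("x",)] * 500)
    rows = con.execute("SELECT id FROM t").fetchall()          # N rows materialised
    for (row_id,) in rows:
        con.execute("DELETE FROM t WHERE id = ?", (row_id,))   # N separate statements

Against an in-process SQLite file the difference is merely N statements versus one; against a networked DBMS each statement is also a round trip, which is where the orders of magnitude come from.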
Responding to Nobis... >> First, I think it is important to clarify that the 'relational' in >> the mismatch isn't referring to the fact that the OO paradigm uses >> something other than set theory's relational model. The nature of >> the impedance mismatch lies in the way the OO and RDB paradigms >> implement the same relational model. > > Huh? I know of the lambda calculus as a foundation of functional > languages and the relation model as foundation of RDBs. But I wonder > what the formal foundations of the OO family of languages is > (references, please). The foundation is much broader because of the need to provide dynamic elements (i.e., the relational model is a subset of the OO foundation). However, the point in this context is that a UML Class Diagram *is* an Entity Relationship Diagram and it is normalized exactly the same way as an ERD Data Model. But the Class Diagram is just one view of the overall solution and it needs to play with other views so the semantics of construction are somewhat different, as I indicated in my post. -- There is nothing wrong with me that could not be cured by a capful of Drano. H. S. Lahman hsl@pathfindermda.com Pathfinder Solutions http://www.pathfindermda.com blog: http://pathfinderpeople.blogs.com/hslahman "Model-Based Translation: The Next Step in Agile Development". Email info@pathfindermda.com for your copy. Pathfinder is hiring: http://www.pathfindermda.com/about_us/careers_pos3.php. (888)OOA-PATH
On Mar 4, 11:10 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-03 18:34:39 -0600, Marshall <marshall.spi...@gmail.com> said: > > > In SQL, if I have two relations with x and y int columns, I can > > union them, or join on them, or whatever. There is no way, > > in fact, to forbid such a thing, just like in Java there is no way > > to allow such a thing. > > You are confusing OO with static typing. How supremely annoying to have gone to some lengths to carefully use the most strictly defined, modern type system terminology, only to have it labeled as a novice error by someone who missed my point entirely. At least you didn't say "duck typing." > In OO languages like Ruby, > Python, or Smalltalk you can pass any object to any function > irrespective of type. Actually, the dimension I was referring to is "nominal" vs. "structural" typing. This axis is independent of static vs. "dynamic" typing. Ruby, Python and Smalltalk all use structural, dynamic (aka runtime) typing. C++ uses static, nominal typing. Java uses a static, nominal type system with the addition of runtime types; SQL uses static, structural typing. I was speaking of nominal vs. structural. Marshall
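For readers unfamiliar with the distinction Marshall is drawing, the nominal-versus-structural axis can be roughly illustrated inside a single language using Python stand-ins; this is only an analogy for the axis itself, not a claim about how SQL, Java, or Smalltalk are implemented.

    from typing import Protocol

    class PointLike(Protocol):      # structural: anything with int x and y qualifies
        x: int
        y: int

    class GridCell:                 # never mentions PointLike, but has the right shape
        def __init__(self) -> None:
            self.x = 0
            self.y = 0

    def structural_use(p: PointLike) -> int:
        return p.x + p.y            # a static checker accepts GridCell: the shape matches

    class NamedPoint:               # nominal: only NamedPoint and its subclasses qualify
        x: int = 0
        y: int = 0

    def nominal_use(p: NamedPoint) -> int:
        return p.x + p.y            # a static checker rejects GridCell: wrong declared name

    print(structural_use(GridCell()))   # fine both statically and at run time
    print(nominal_use(GridCell()))      # runs, but a checker such as mypy would flag it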
H. S. Lahman wrote: > Responding to Nobis... > > >> First, I think it is important to clarify that the 'relational' in > >> the mismatch isn't referring to the fact that the OO paradigm uses > >> something other than set theory's relational model. The nature of > >> the impedance mismatch lies in the way the OO and RDB paradigms > >> implement the same relational model. > > > > Huh? I know of the lambda calculus as a foundation of functional > > languages and the relation model as foundation of RDBs. But I wonder > > what the formal foundations of the OO family of languages is > > (references, please). > > The foundation is much broader because of the need to provide dynamic > elements (i.e., the relational model is a subset of the OO foundation). This "subset" thing is perhaps misleading. While trying to figure out how to merge paradigms in another forum in order to make everybody happy and stop the bickering, we eventually generally agreed that paradigms are more about *constraints* than features. What a paradigm *doesn't* allow is often more important to defining it than what it does allow. The "no side-effects" rule of Functional Programming is an example. If one tries to create a language or tool that allows multiple paradigms, it often has to relax such constraints such that the essence of the original paradigm is lost. One cannot "reason" about the results as easily because they don't know which rules a given thing will abide by. Paradigms are tools to help human reasoning by providing rules for the "atoms" to follow. If there are too many rules, or a lack of rules, then it becomes a big ball of twine too hard to get one's head around because any assumption you rely on to mentally narrow down the possible paths something can take may just be all wrong. > > However, the point in this context is that a UML Class Diagram *is* an > Entity Relationship Diagram and it is normalized exactly the same way as > an ERD Data Model. But the Class Diagram is just one view of the overall > solution and it needs to play with other views so the semantics of > construction are somewhat different, as I indicated in my post. > > > -- > There is nothing wrong with me that could > not be cured by a capful of Drano. > > H. S. Lahman -T-
On Mar 4, 10:52 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-03 17:25:48 -0600, topmind <topm...@technologist.com> said: > > > But I think anybody inspecting both examples will clearly see that my > > version is a lot less total code. > > C++ is a pretty wordy language. If I wrote it in Ruby I bet I'd beat > you by a wide margin. I am skeptical, but you are welcome to try. And it would probably be the meta features of Ruby that cut it down, not OOP. > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com > The Agile Transition Experts | web: www.objectmentor.com > 800-338-6716 | -T-
On Mar 5, 1:24 am, S Perryman <q...@q.com> wrote: > Robert Martin wrote: > > On 2008-03-03 18:34:39 -0600, Marshall <marshall.spi...@gmail.com> said: > >> In SQL, if I have two relations with x and y int columns, I can > >> union them, or join on them, or whatever. There is no way, > >> in fact, to forbid such a thing, just like in Java there is no way > >> to allow such a thing. > > You are confusing OO with static typing. In OO languages like Ruby, > > Python, or Smalltalk you can pass any object to any function > > irrespective of type. > > And you (both) are equating the (strong) typing model of Simula as the > only strong typing model. No I most certainly am not, and there isn't anything in my post that could lead one to conclude that I am. Marshall
On Mar 4, 1:43 pm, Stefan Nobis <sno...@gmx.de> wrote: > "H. S. Lahman" <h...@pathfindermda.com> writes: > > > First, I think it is important to clarify that the 'relational' in > > the mismatch isn't referring to the fact that the OO paradigm uses > > something other than set theory's relational model. The nature of > > the impedance mismatch lies in the way the OO and RDB paradigms > > implement the same relational model. > > Huh? I know of the lambda calculus as a foundation of functional > languages and the relation model as foundation of RDBs. But I wonder > what the formal foundations of the OO family of languages is > (references, please). The foundation of OO mostly hasn't been laid down yet. Little work has been done. If you are interested: http://www.google.com/search?q=theory+of+objects Abadi and Cardelli have done some work in this area. Cardelli in particular is a brilliant CS researcher; you cannot go wrong paying attention to anything he has to say. Marshall
On Mar 5, 7:58 am, "H. S. Lahman" <h...@pathfindermda.com> wrote: > > ... the relational model is a subset of the OO foundation ... Please describe where JOIN and UNION, for example, are to be found in the OO foundation. Or in any OO language. Marshall
On Mar 5, 8:13 am, topmind <topm...@technologist.com> wrote: > > [...] we eventually generally agreed that > paradigms are more about *constraints* than features. What a paradigm > *doesn't* allow is often more important to defining it than what it > does allow. The "no side-effects" rule of Functional Programming is an > example. If one tries to create a language or tool that allows > multiple paradigms, it often has to relax such constraints such that > the essence of the original paradigm is lost. Nice post! Marshall
On Mar 3, 8:52 am, JOG <j...@cs.nott.ac.uk> wrote: > > I was hoping perhaps people might be able to offer > perspectives on the issues that they have encountered. Another big difference: Object oriented languages work in object-at-a-time terms. Even when those objects are collections, if one wants to operate on every object in the collection, one iterates over the objects in the collection and calls methods on those objects one at a time. The relational model works in set-at-a-time terms. One operates on entire sets at once. The two don't fit together very well. One context in which the set-at-a-time approach yields dramatic performance benefits is in distributed programming. It has been my experience that the single biggest performance issue in programming for the datacenter (as opposed to programming for single machines) is the nature of the protocol used across edges in the server graph. The more work the protocol can accomplish in a single message the better. Set-at-a-time thinking, and tools, are necessary. Marshall
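A small sketch of the same operation written both ways; the table and column names are invented, and this is only meant to illustrate the two idioms.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, salary REAL)")
    con.executemany("INSERT INTO employee (salary) VALUES (?)", [(100.0,)] * 1000)

    # Set-at-a-time: one expression over the whole set, however many rows it touches.
    con.execute("UPDATE employee SET salary = salary * 1.1 WHERE salary < 200")

    # Object-at-a-time: iterate over the collection and act on each element in turn.
    for (row_id, salary) in con.execute("SELECT id, salary FROM employee").fetchall():
        if salary < 200:
            con.execute("UPDATE employee SET salary = ? WHERE id = ?",
                        (salary * 1.1, row_id))

Sent across a wire, the first form is one message regardless of the number of rows affected, which is the distributed-programming point made above.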
Marshall wrote: > On Mar 5, 1:24 am, S Perryman <q...@q.com> wrote: >>And you (both) are equating the (strong) typing model of Simula as the >>only strong typing model. > No I most certainly am not, and there isn't anything in my post > that could lead one to conclude that I am. As clarified (thanks) by your subsequent posting to Robert Martin. Regards, Steven Perryman
On Mar 5, 9:22 am, S Perryman <q...@q.net> wrote: > Marshall wrote: > > On Mar 5, 1:24 am, S Perryman <q...@q.com> wrote: > >>And you (both) are equating the (strong) typing model of Simula as the > >>only strong typing model. > > No I most certainly am not, and there isn't anything in my post > > that could lead one to conclude that I am. > > As clarified (thanks) by your subsequent posting to Robert Martin. Uh, sure. Please excuse me if I was a bit hot above. Marshall
On Mar 5, 7:48 am, Marshall <marshall.spi...@gmail.com> wrote: > The most gratuitous example I can think of was some early > EJB containers I played with, back when I was still thinking > that ORM was something that could possibly be done well. > Against a table of a few hundred rows, one could execute > "delete from table". The comparable command through the > ORM issued SQL to load every row as an object, then > in a loop called obj.delete() which issued a single DELETE > statement for that row. It was ten thousand times slower, > and that's for only a couple hundred rows. Of course this > example is extreme, but it's still illustrative of a general > principle. > > I have *often* seen four and five order of magnitude > performance difference between straight SQL and > ORM SQL, across a wide variety of ORMs. The > very idea of ORM demands it: you have to try to > push a whole set-oriented language through a functional > interface. Yep, this is the "drive car to supermarket" analogy in Stephane Faroult's video, Part 2: http://www.youtube.com/watch?v=GbZgnAINjUw It is a recurrent theme of application database performance meetings. Why is set-oriented processing more friendly to optimisation? (Contrary to the moronic blanket statement that joins are bad.) Because there is a nice little algebra behind it, so that optimization is essentially algebraic manipulation of query expressions. This is not an easy subject to master, of course, so why some people have chosen to hide their heads in the sand of imaginary code reorganisation problems is perfectly understandable.
Marshall wrote: > On Mar 3, 8:52 am, JOG <j...@cs.nott.ac.uk> wrote: >>I was hoping perhaps people might be able to offer >>perspectives on the issues that they have encountered. > Another big difference: > Object oriented languages work in object-at-a-time terms. > Even when those objects are collections, if one wants to > operate on every object in the collection, one iterates > over the objects in the collection and calls methods on > those objects one at a time. > The relational model works in set-at-a-time terms. One > operates on entire sets at once. This is a fallacy. In any system, if I have a set S of tuples (x,y), and request the following: { e IN S : e.x = 123 } I have to examine each tuple in the set to find those that satisfy the predicate. The satisfying tuples do not appear by magic. > The two don't fit together very well. Particular *implementations* of OO prog langs may not fit well with a relational execution engine. But some (OO implemented on Functional programming infrastructure etc) fit very well ("lazy" programming etc). Regards, Steven Perryman
On Mar 5, 9:31 am, S Perryman <q...@q.net> wrote: > Marshall wrote: > > On Mar 3, 8:52 am, JOG <j...@cs.nott.ac.uk> wrote: > >>I was hoping perhaps people might be able to offer > >>perspectives on the issues that they have encountered. > > Another big difference: > > Object oriented languages work in object-at-a-time terms. > > Even when those objects are collections, if one wants to > > operate on every object in the collection, one iterates > > over the objects in the collection and calls methods on > > those objects one at a time. > > The relational model works in set-at-a-time terms. One > > operates on entire sets at once. > > This is a fallacy. > In any system, if I have a set S of tuples (x,y) , and request the > following : > > { e IN S : e.x = 123 } > > I have to examine each tuple in the set to find those that satisfy the > predicate. The satisfying tuples do not appear by magic. Sigh: "magic optimization" (this is indeed a legitimate technical term) is not applied in this case. However, consider an index on column x. Suddenly the execution engine chooses an access path more sophisticated than a naive loop and application of a filter. The index management and its leverage in queries are completely transparent to the user. This is one of the features that elevates the relational model to a high level. P.S. This déjà vu of the great debate is silly, one of the reasons being that the caliber of participants is smaller.
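A small Java/JDBC illustration of that transparency, with invented table and index names; the caller's query text is identical with or without the index, and only the engine's chosen access path differs:

    // Sketch only: table "s", columns x/y and index s_x_idx are invented names.
    import java.sql.*;

    class IndexTransparency {
        static void findByX(Connection con, int value) throws SQLException {
            // A DBA (or a migration script) may add this once, independently of the caller:
            //   CREATE INDEX s_x_idx ON s (x)
            try (PreparedStatement ps =
                     con.prepareStatement("SELECT x, y FROM s WHERE x = ?")) {
                ps.setInt(1, value);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("x") + ", " + rs.getInt("y"));
                    }
                }
            }
        }
    }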
Tegiri Nenashi wrote: > On Mar 5, 9:31 am, S Perryman <q...@q.net> wrote: >>Marshall wrote: >>>Object oriented languages work in object-at-a-time terms. >>>Even when those objects are collections, if one wants to >>>operate on every object in the collection, one iterates >>>over the objects in the collection and calls methods on >>>those objects one at a time. >>>The relational model works in set-at-a-time terms. One >>>operates on entire sets at once. >>This is a fallacy. >>In any system, if I have a set S of tuples (x,y) , and request the >>following : >>{ e IN S : e.x = 123 } >>I have to examine each tuple in the set to find those that satisfy the >>predicate. The satisfying tuples do not appear by magic. > Sigh: "magic optimization" (this is indeed a legitimate technical > term) is not applied in this case. However, consider an index on > column x. Sigh : in the text "{ e IN S : e.x = 123 }" there is no "index" . Merely a Relational expression. The use of index is an *implementation* technique. Argue about implementation techniques as much as you want. But you cannot claim that all OO prog langs will have an implementation that forces the fallacious "one at a time" scheme (Functional programming being a case in point) . Regards, Steven Perryman
On Mar 5, 9:16 am, Marshall <marshall.spi...@gmail.com> wrote: > > The relational model works in set-at-a-time terms. One > operates on entire sets at once. I forgot to mention: there's an amusing term used in this area, which comes from the APL family: "no stinking loops." http://www.nsl.com/ Marshall
On Mar 5, 9:31 am, S Perryman <q...@q.net> wrote: > Marshall wrote: > > On Mar 3, 8:52 am, JOG <j...@cs.nott.ac.uk> wrote: > >>I was hoping perhaps people might be able to offer > >>perspectives on the issues that they have encountered. > > Another big difference: > > Object oriented languages work in object-at-a-time terms. > > Even when those objects are collections, if one wants to > > operate on every object in the collection, one iterates > > over the objects in the collection and calls methods on > > those objects one at a time. > > The relational model works in set-at-a-time terms. One > > operates on entire sets at once. > > This is a fallacy. > In any system, if I have a set S of tuples (x,y) , and request the > following : > > { e IN S : e.x = 123 } > > I have to examine each tuple in the set to find those that satisfy the > predicate. The satisfying tuples do not appear by magic. I don't understand what point you're trying to make here. Are you talking about the language or the implementation? You use set-builder notation to describe a set. In the relational model, something very nearly identical would be used. It is a single set-oriented expression. In an OOPL, one would iterate over a collection. Inside a loop, one would find expressions or statements written against single objects. The semantics might be the same; the languages are different. > > The two don't fit together very well. > > Particular *implementations* of OO prog langs may not fit well with a > relational execution engine. But some (OO implemented on Functional > programming infastructure etc) fit very well ( "lazy" programming > etc) . Again, I don't understand what you're trying to say. I am discussing differences at the language level, not implementation differences. Yes, we could implement either a relational or an OO language with a functional SSA intermediate language; this doesn't affect what the abstractions of that language are, or whether they are set-oriented or object-oriented. Also, lazy vs. strict seems a completely orthogonal issue; I don't see why you bring it up. As an *implementation* point, when one puts the two different kinds on languages on the wire, one gets two different sorts of performance characteristics. These characteristics heavily favor set-oriented language. This isn't *necessarily* the case, true, but it is *actually* the case in every circumstance I'm aware of. Marshall
On Wed, 5 Mar 2008 12:22:24 +0000, Eric wrote: > If you should not use multiple languages, there must be a universal > language. Yep, it is called a universal/general purpose programming language. > What is it, is it really universal _right now_, and if not, It is, there exist many of them. The mistake you make is in a wrong presumption that "universal purpose" <=> "best possible." > when will it be and what should we do in the meantime? Not to develop pet domain-specific languages if the advantages of those are unclear. If you, say, wanted to create a declarative language based on inference, then that should be a universal purpose one. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Mar 5, 9:58 am, S Perryman <q...@q.com> wrote: > But you cannot claim that all OO prog langs will have an implementation > that forces the fallacious "one at a time" scheme Can you name one that doesn't? (I think I agree that one could exist in theory, however.) > (Functional programming being a case in point) . Why are you bringing up functional programming when talking about OOPLs? Are you thinking of map(), by chance? Marshall
On Wed, 05 Mar 2008 09:57:23 -0500, Thomas Gagne wrote: > Dmitry A. Kazakov wrote: >> On Wed, 5 Mar 2008 01:10:01 -0600, Robert Martin wrote: >> >>> <snip> >>> You are confusing OO with static typing. In OO languages like Ruby, >>> Python, or Smalltalk you can pass any object to any function >>> irrespective of type. >> >> Which is a bad idea. >> > Why? Because it is in fact untyped. >> Nevertheless you don't need dynamic typing in order to deal with that. You >> could have a class of relations in order to define operations (like join) >> on them. That will give a static type to the result of any join. > At what cost given a similar benefit can be gained without the extra effort? No extra efforts. If I wanted to mix types T1 and T2, I put them in the same class. It is purely a mental effort, which has been made. The class was identified, it is what DB people call "relation." But this is too little efforts to me. >> The problem is elsewhere. How do I know in *advance* that the result of >> join is a relation of certain narrower kind? Both "statically untyped" SQL >> and "dynamically untyped" fancy languages have no answer to that until run >> time. Note that this is a software design issue. >> > The result is what it is. If it answers the messages sent it > predictably what does it matter? How can you predict it? To answer to messages in a certain way is a behavior. Let's denote it T. Now, the question is about ~T. Do all possible objects expose T, so that ~T be empty? I bet some don't. Now, because T is not statically determinable we could not predict T. That matters. Clearly we cannot map all behavior to types in order to statically ensure it. But we have always try to do it as much as possible. Giving up without even a try is a bad idea from software design point of view. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
Marshall wrote: > On Mar 5, 9:58 am, S Perryman <q...@q.com> wrote: >>But you cannot claim that all OO prog langs will have an implementation >>that forces the fallacious "one at a time" scheme > Can you name one that doesn't? (I think I agree that > one could exist in theory, however.) CLOS, OCaml etc (by virtue of being built on Functional prog langs) . >>(Functional programming being a case in point) . > Why are you bringing up functional programming when > talking about OOPLs? Because FP has : - been able to support OO quite easily - has an interesting execution infrastructure that IMHO makes it a good candidate for supporting the Relational paradigm Given that the topic is "Object-relational impedence" , I am interested in anything that may be able to remove the impedence. And you ?? > Are you thinking of map(), by chance? Are you talking about the map function (map : Type x Func -> Type) ?? If so, no (in fact, no for any likely meaning of "map" ) . Regards, Steven Perryman
On Wed, 5 Mar 2008 11:34:05 +0000, Eric wrote: > On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > I don't believe that merely using an RDBMS will solve all problems. What > I meant was that, accepting what David said above, if you keep your data > in an RDBMS, it will be easily available for the solution of any > possible problem that can be solved using that data. No, this as well is wrong. Keeping "data" in RDBMS puts certain restrictions on what can be stored there and how it can be used later. You propose a solution (DB on a RDBMS). What was the problem? From your description the problem looks like a persistence layer. This what c.d.t. people vehemently disagree. Isn't it interesting to see how quickly a great theory of all degrades to storage? >> I guess that his hidden argument was "because RDB >> apparently solves all problems, then data-centric view is the right one." > > No, the argument is that the data-centric view is often (not necessarily > always) the right one to take, and that keeping the data in an RDBMS > provides a better basis for solving future problems using the data than > any other way of keeping it. Ah, that's better. So data-centric view is not universal => then there exist cases which cannot be viewed that way => data is not a fundamental term => data do not unconditionally exist. Are we in agreement? >> Yet another logical fallacy was a data-centric problem statement: "there >> will never be any other application that will need my data," used in order >> to prove data-centric view itself. > > There is no fallacy here. First there is a fact - you have data, however > you choose to view it. Do you want to deny this? Certainly. Non-observable things do not exist. But the fallacy is that you are putting existence of data as a fact from which you deduce data existence. >> In OO problems are not modeled in terms >> of applications using data. > > No, but the data is still there. In which sense? Laplace has answered this two hundred years ago: "I did not need to make such an assumption." -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
Marshall wrote: > On Mar 5, 9:31 am, S Perryman <q...@q.net> wrote: M>Object oriented languages work in object-at-a-time terms. M>Even when those objects are collections, if one wants to M>operate on every object in the collection, one iterates M>over the objects in the collection and calls methods on M>those objects one at a time. M>The relational model works in set-at-a-time terms. One M>operates on entire sets at once. >>This is a fallacy. >>In any system, if I have a set S of tuples (x,y) , and request the >>following : >>{ e IN S : e.x = 123 } >>I have to examine each tuple in the set to find those that satisfy the >>predicate. The satisfying tuples do not appear by magic. > I don't understand what point you're trying to make here. > Are you talking about the language or the implementation? Both. > You use set-builder notation to describe a set. In the > relational model, something very nearly identical would be > used. It is a single set-oriented expression. In an OOPL, > one would iterate over a collection. Inside a loop, one > would find expressions or statements written against > single objects. boolean f(Tuple t) { return (t.x = 123) ; } Set<Tuple> S ; Set<Tuple> t = S.match(f) ; // or match(S,f) if one prefers 1. How is the above not "set-oriented" ?? A set is given as input to a match operation which produces a set as output. 2. I have no idea whatsoever *how* S performs the match by looking at the above. > The semantics might be the same; the languages are different. The semantics are the same. The syntax may be different. The implementation (technique, performance) may well be the same. M>The two don't fit together very well. >>Particular *implementations* of OO prog langs may not fit well with a >>relational execution engine. But some (OO implemented on Functional >>programming infastructure etc) fit very well ( "lazy" programming >>etc) . > Again, I don't understand what you're trying to say. I am discussing > differences at the language level, not implementation differences. Are we arguing about whether one syntax for expressing set operations is better than the other ?? > Yes, we could implement either a relational or an OO language > with a functional SSA intermediate language; this doesn't affect > what the abstractions of that language are, or whether they > are set-oriented or object-oriented. > Also, lazy vs. strict seems a completely orthogonal issue; I don't > see why you bring it up. If you are debating language syntax (are you ?? ) , then no point whatsoever. > As an *implementation* point, when one puts the two different > kinds on languages on the wire When one puts different kinds of *implementations* of the same behaviour ... > one gets two different sorts of performance characteristics. Can you define these "characteristics" for us ?? > These characteristics heavily > favor set-oriented language. This isn't *necessarily* the case, > true, but it is *actually* the case in every circumstance > I'm aware of. Unable to comment without above definitions. Regards, Steven Perryman
On Wed, 5 Mar 2008 12:12:47 +0000, Eric wrote: > On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >> On Tue, 4 Mar 2008 20:26:01 +0000, Eric wrote: >> >>> That's what I said - logically possible criteria. >> >> What about things which cannot be spelt in SQL? > > Use the query language (not necessarily SQL) to pass the data to > something that can deal with it. This does not equate to "logically possible criteria." It is "something that can understand this." >> What about response times? >> Can you specify/guess an upper bound for all requests? For a certain subset >> of? > > This is just a prejudice - get the right RDBMS and the right expert to > tune it (specifically for what it must do, not generically) and you > might be surprised. This is not a prejudice, this is the point. I don't want to be surprised. Engineering is about predictability. >>> Relative to what? Where are the tests? Do you install an RDBMS product >>> and just go with whatever myths you have heard lately, or do you get a >>> product specialist to sort it out? >> >> Come on, show me the nearest neighbour search in ten-dimensional space >> implemented in RDBMS. What would be the complexity of? You should clearly >> understand that it is possible to break the neck of *any* indexing method. >> This refutes the argument to "any logically possible criterion.". > > But I don't believe that an RDBMS can do everything. I would not be > surprised to find the data defining the ten-dimensional space in an > RDBMS, but I would never expect to do such a calculation in the query > language. From this I conclude incompleteness of the approach. There is nothing bad in that. At this point you should have said, "OK, how could we reason about the applicability of this approach (and other approaches)." A framework where that could be done is that common ground where c.d.t. and c.o. would unite. >>> I at least have no problem with using OO programming >>> for that. Also, that explains your short-term view. But what do you do >>> with the data (presumably transformed) that does get kept for longer? >>> Put it somewhere that will be available for a variety of expected and >>> unexpected uses? But we were here before! >> >> Yes, here we go again. Data are meaningless if usage is unexpected. Nobody >> can use a CD-ROM in a wind-up phonograph, deaf people notably. > > No collection of data is useful to everybody, so deaf people have got > nothing to do with it. Theorem: for any collection of data, there exist somebody who does not need it. > Other than that, you have just demonstrated a lack of understanding of > the difference between the logical and the physical. It is possible for > me to collect the necessary bits of technology to transfer a piece of > music from a CD-ROM to a disc or cylinder for the wind-up phonograph. It > is still the same piece of music. No. Any logical is physical on another abstraction level. Ants do not listen to music. Information does not exist without a receiver. >> The system does not keep anything it exists and behaves. > > But it has inputs and outputs. There may also be a need for it to record > some of its behaviour. There may be a reason to keep some of the > outputs, or even the inputs (for later extended analysis?). If you have > a system that genuinely keeps nothing, I have no argument with how you > choose to create it, as long as it works. No you get me wrong. Of course we do logging, in fact a lot of. The system behaves as if it "kept" things. 
Whether it physically keeps them is an implementation detail. Consider sine. Does it keep its output? It is a meaningless question. You can tabulate sine and keep the table, you can use Chebyshev's polynomials, or you can ask an oracle. It is no matter. Sine is a behavior of real numbers. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
H. S. Lahman wrote: > Responding to Nobis... > >>> First, I think it is important to clarify that the 'relational' in >>> the mismatch isn't referring to the fact that the OO paradigm uses >>> something other than set theory's relational model. The nature of >>> the impedance mismatch lies in the way the OO and RDB paradigms >>> implement the same relational model. >> >> Huh? I know of the lambda calculus as a foundation of functional >> languages and the relation model as foundation of RDBs. But I wonder >> what the formal foundations of the OO family of languages is >> (references, please). > > The foundation is much broader because of the need to provide dynamic > elements (i.e., the relational model is a subset of the OO foundation). You are so full of shit your eyes must be brown. [nonsense snipped]
On Mar 5, 12:19 pm, rugby fan <q...@q.com> wrote: > Marshall wrote: > > On Mar 5, 9:58 am, S Perryman <q...@q.com> wrote: > >>But you cannot claim that all OO prog langs will have an implementation > >>that forces the fallacious "one at a time" scheme > > Can you name one that doesn't? (I think I agree that > > one could exist in theory, however.) > > CLOS, OCaml etc (by virtue of being built on Functional prog langs) . Well, hrmm. Let's see. While those are certainly *interesting* and *surprising* examples (in this context) I don't know that they qualify as counterexamples to my point. Take CLOS. It is an object system built on LISP. As such, is it fair to consider it in terms of the primitives or "axioms" that LISP is built on? (It may not be; the mechanisms required for its complex function invocation rules may strengthen the system; I'm not familiar enough with this to know.) Those primitives (car, cdr, cons, quote, atom, eq, cond) are none of them set-oriented. An argument could be made that some of them (car, cdr, cons) are list-oriented, but even in that case, it does not rise to the level I am thinking of. For example, if we consider how one processes all the items in a list, one finds a pattern (a very *nice* pattern, you understand; I'm not complaining about it) in which we take the head of the list, operate on it, and recurse with the rest of the list. Many good things can be said of this pattern, but it is not set-oriented. I am not familiar enough with OCaml to say much. But if we were talking SML (a language I have the utmost respect for) I would still not consider it set-oriented. map(), filter(), fold(), while providing collection-oriented interfaces, are typically not primitive in the language. > >>(Functional programming being a case in point) . > > Why are you bringing up functional programming when > > talking about OOPLs? > > Because FP has : > > - been able to support OO quite easily > > - has an interesting execution infrastructure that IMHO makes it a > good candidate for supporting the Relational paradigm Hrmmm, well, I have to admit you make a good argument here. (Although you must grant me that, when people speak of OOPLs, they are not usually thinking of CLOS!) But I am, rather, speaking of what languages' axiomatic natures are. I could, in fact, write a Relation class in Java, and provide set-oriented methods for it, and a whole host of relational goodness. But that wouldn't make Java a relational language per se. Virtually no languages have primitive support for anything like a collection. SQL and SETL and a few others; that's it. There are some languages that were designed from the start with *list* processing in mind: lisp (and I should probably also mention the APL family here.) There are some *very* interesting things in there, but not things I would say could be strictly described as set-oriented. > Given that the topic is "Object-relational impedence" , I am interested > in anything that may be able to remove the impedence. And you ?? > > > Are you thinking of map(), by chance? > > Are you talking about the map function (map : Type x Func -> Type) ?? > If so, no (in fact, no for any likely meaning of "map" ) . Okay. Marshall
Tegiri Nenashi wrote: > On Mar 5, 7:48 am, Marshall <marshall.spi...@gmail.com> wrote: > > The most gratuitous example I can think of was some early > > EJB containers I played with, back when I was still thinking > > that ORM was something that could possibly be done well. > > Against a table of a few hundred rows, one could execute > > "delete from table". The comparable command through the > > ORM issued SQL to load every row as an object, then > > in a loop called obj.delete() which issued a single DELETE > > statement for that row. It was ten thousand times slower, > > and that's for only a couple hundred rows. Of course this > > example is extreme, but it's still illustrative of a general > > principle. > > > > I have *often* seen four and five order of magnitude > > performance difference between straight SQL and > > ORM SQL, across a wide variety of ORMs. The > > very idea of ORM demands it: you have to try to > > push a whole set-oriented language through a functional > > interface. > > Yep, this is "drive car to supermarket" analogy in Stephane Faroult > video > Part 2: http://www.youtube.com/watch?v=GbZgnAINjUw > It is a recurrent theme of application database performance meetings. > > Why set oriented processing is more friendly to optimisation? > (Contrary to a moronic blanket statement that joins are bad). Because > there is a little nice algebra behind it, so that optimization is > essentially algebraic manipulation with query expressions. This is not > an easy subject to master of course, so why some people have chosen to > hide their head in the sand of imaginary code reorganisation problems > is perfectly understandable. I think SQL itself is a large part of the comprehension problem. In part, it doesn't easily allow one to break the problem down into smaller sub-problems for a mental divide-and-conquer. If the optimizer wants to re-lump it all together internally for efficiency, that's fine, but the query "formula" itself doesn't have to be written as one big lump. I've been kicking around an experimental relational language called SMEQL (yes, a hint of LOR in the name) that is more "programmer friendly" and decomposable than SQL. And it's designed to be DBA-extendable due to its simpler syntax. It is roughly based on IBM's BS-12 experimental language from the early 70's. -T-
Dmitry A. Kazakov wrote: > On Wed, 5 Mar 2008 11:34:05 +0000, Eric wrote: > > > On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > > > I don't believe that merely using an RDBMS will solve all problems. What > > I meant was that, accepting what David said above, if you keep your data > > in an RDBMS, it will be easily available for the solution of any > > possible problem that can be solved using that data. > > No, this as well is wrong. Keeping "data" in RDBMS puts certain > restrictions on what can be stored there and how it can be used later. As far as "what can be stored", isn't this a vendor-implementation issue? Most RDBMS strive for efficiency over flexibility in column content, but the tradeoff can be tilted in favor of flexibility if need be. As far as "how it can be used", what is an example? A RDMBS cannot stop you from doing anything you want to with retrieved data. And, you can stick all kinds of meta data about an item to make retrieval simpler and/or faster. I agree there are very specialized niches where RDBMS performance cannot beat a custom-rolled database-like tool. But a custom-built tool for very specific and narrow usage patterns will almost always beat a general-purpose DB. RDBMS do better where any given data item may be used by multiple people for multiple different purposes that are hard to anticipate up-front. > Regards, > Dmitry A. Kazakov -T-
S Perryman <q@q.com> wrote in news:fqn0ir$mdg$1@aioe.org: > single objects. > > boolean f(Tuple t) { return (t.x = 123) ; } > > Set<Tuple> S ; > > Set<Tuple> t = S.match(f) ; // or match(S,f) if one prefers > > 1. How is the above not "set-oriented" ?? > > A set is given as input to a match operation which produces a > set as output. > > > 2. I have no idea whatsoever *how* S performs the match by > looking at the above. > 'Match' is cool, but what about more interesting operations like 'project(join(R1,R2), R1.a1, R2.b3)' where R1 is a set of <c,a1,a2,a3> tuples and R2 is a set of <c, b1,b2,b3> tuples ? How do you express that in your fav OO language ?
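One literal way to write it, as a sketch only: a naive in-memory rendering in Java (16+, for records), with the record types and a nested-loop join invented for the example; a real engine would of course choose its own join strategy:

    // Sketch of project(join(R1, R2), a1, b3) over in-memory sets; names invented.
    import java.util.Set;
    import java.util.stream.Collectors;

    record R1(int c, int a1, int a2, int a3) {}
    record R2(int c, int b1, int b2, int b3) {}
    record A1B3(int a1, int b3) {}

    class JoinProject {
        static Set<A1B3> projectJoin(Set<R1> r1s, Set<R2> r2s) {
            return r1s.stream()
                      .flatMap(t1 -> r2s.stream()
                                        .filter(t2 -> t2.c() == t1.c())          // join on c
                                        .map(t2 -> new A1B3(t1.a1(), t2.b3())))  // project a1, b3
                      .collect(Collectors.toSet());
        }
    }

This captures the semantics of the expression, but unlike the relational form it also fixes one particular evaluation order.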
On 2008-03-04 17:29:01 -0600, TroyK <cs_troyk@juno.com> said: > On Mar 3, 3:11 pm, Robert Martin <uncle...@objectmentor.com> wrote: >> On 2008-03-03 12:29:02 -0600, TroyK <cs_tr...@juno.com> said: >> >>> My experience is somewhere between 2 and 3 orders of magnitude >>> difference between implementing a business rules change in the db vs. >>> the programming team doing it in OO code. >> >> Then you should be able to fly rings around the programmers and get >> them all fired. Why haven't you? > > If by "fly rings around the programmers" you mean having a fully > functioning reference implementation up and running in SQL within 2 > weeks that ends up taking a team of 3 programmers over 3 months to > implement in code, then, yeah, I guess I do. But the architecture > called for the programming to be done in a business layer implemented > in C# -- we expected and planned for that, so, happily, no one gets > fired. I'm sorry, but are you saying that you blithely allowed your organization to expend nine man months of fruitless labor? Or are you saying that there was no way your SQL "reference implementation" could have been used in production? -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-05 02:56:02 -0600, "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> said: > On Wed, 5 Mar 2008 00:46:14 -0600, Robert Martin wrote: > >> On 2008-03-03 16:49:29 -0600, "David Cressey" <cressey73@verizon.net> said: >> >>> But the idea of a single language that is suitable for everything remains an >>> elusive goal, and probably an unproductive endeavor. >> >> Agreed. > > Disagreed. > > The idea of multilingual system is the most damaging thing in software > developing history. Whooo! Then I guess it's back to toggling in binary for us all. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-05 10:16:30 -0600, topmind <topmind@technologist.com> said: > On Mar 4, 10:52 pm, Robert Martin <uncle...@objectmentor.com> wrote: >> On 2008-03-03 17:25:48 -0600, topmind <topm...@technologist.com> said: >> >>> But I think anybody inspecting both examples will clearly see that my >>> version is a lot less total code. >> >> C++ is a pretty wordy language. If I wrote it in Ruby I bet I'd beat >> you by a wide margin. > > I am skeptical, but you are welcome to try. And, it would probably > be the meta features of Ruby that cut it down, not OOP. The "meta" features of Ruby *are* OO. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-05 06:48:31 -0600, JOG <jog@cs.nott.ac.uk> said: >>> 1) So why not treat all 'inheritance' in this way? >> >> Because all inheritance is not about inference. > > Hmmm. Then might you give an example of a situation where inheritance > cannot be described in terms of inference? Inheritance is simply the redeclaration of functions and variables in a subscope. That's not inference. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-05 09:48:45 -0600, Marshall <marshall.spight@gmail.com> said: > On Mar 4, 11:05 pm, Robert Martin <uncle...@objectmentor.com> wrote: >>> >>> Furthermore, since OOPLs lack physical independence, traversing >>> the graph may be quite expensive, particularly in the case where >>> the graph is backed by storage in a database, which is part of >>> why ORM is such a universally bad idea. >> >> No, you have this wrong. ORMs generally use standard SQL queries to >> traverse and gather data from the DB. Then that data is placed into OO >> structures so that the application can take advantage of the bias. > > Just the fact that they use SQL isn't sufficient. They have to > use it as well as a person could, through an interface that > is generally information-lossy enough (or at least, used in > a lossy way) that that's impossible. Yeah, assembly language programmers used to say the same thing about compilers. Then the compilers started writing more efficient code than the assembly language programmers could... In any case, good ORMs allow you to tune the SQL, so you *can* use it as well as a person could. > The most gratuitous example I can think of was some early > EJB containers I played with, back when I was still thinking > that ORM was something that could possibly be done well. > Against a table of a few hundred rows, one could execute > "delete from table". The comparable command through the > ORM issued SQL to load every row as an object, then > in a loop called obj.delete() which issued a single DELETE > statement for that row. It was ten thousand times slower, > and that's for only a couple hundred rows. Of course this > example is extreme, but it's still illustrative of a general > principle. It is illustrative of a programmer who either doesn't know his tool or didn't select a reasonable tool. > > I have *often* seen four and five order of magnitude > performance difference between straight SQL and > ORM SQL, across a wide variety of ORMs. The > very idea of ORM demands it: you have to try to > push a whole set-oriented language through a functional > interface. Bah. You don't *have* to do any such thing. I won't argue that there aren't programmers and teams who use their tools poorly. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-05 03:13:59 -0600, "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> said: > On Wed, 5 Mar 2008 01:10:01 -0600, Robert Martin wrote: > >> On 2008-03-03 18:34:39 -0600, Marshall <marshall.spight@gmail.com> said: >> >>> In SQL, if I have two relations with x and y int columns, I can >>> union them, or join on them, or whatever. There is no way, >>> in fact, to forbid such a thing, just like in Java there is no way >>> to allow such a thing. >> >> You are confusing OO with static typing. In OO languages like Ruby, >> Python, or Smalltalk you can pass any object to any function >> irrespective of type. > > Which is a bad idea. Bah. It is neither a bad nor a good idea. It is just an idea. There are advantages to dynamic typing and there are disadvantages. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-05 14:10:11 -0600, "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> said: > On Wed, 05 Mar 2008 09:57:23 -0500, Thomas Gagne wrote: > >> Dmitry A. Kazakov wrote: >>> On Wed, 5 Mar 2008 01:10:01 -0600, Robert Martin wrote: >>> >>>> <snip> >>>> You are confusing OO with static typing. In OO languages like Ruby, >>>> Python, or Smalltalk you can pass any object to any function >>>> irrespective of type. >>> >>> Which is a bad idea. >>> >> Why? > > Because it is in fact untyped. No, it's just not statically typed. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-05 03:24:37 -0600, S Perryman <q@q.com> said: > Robert Martin wrote: > >> On 2008-03-03 18:34:39 -0600, Marshall <marshall.spight@gmail.com> said: > >>> In SQL, if I have two relations with x and y int columns, I can >>> union them, or join on them, or whatever. There is no way, >>> in fact, to forbid such a thing, just like in Java there is no way >>> to allow such a thing. > >> You are confusing OO with static typing. In OO languages like Ruby, >> Python, or Smalltalk you can pass any object to any function >> irrespective of type. > > And you (both) are equating the (strong) typing model of Simula as the > only strong typing model. You presume too much. Type inference can be nearly as flexible as dynamic typing. > Go and look at Functional prog langs etc for > examples of how Marshall's gripe would be done in a type-safe manner. Agreed. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-05 09:58:43 -0600, Marshall <marshall.spight@gmail.com> said: >> You are confusing OO with static typing. > > How supremely annoying to have gone to some lengths > to carefully use the most strictly defined, modern type > system terminology, only to have it labeled as a novice > error by someone who missed my point entirely. How awful for you. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On Mar 5, 10:48 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-05 09:58:43 -0600, Marshall <marshall.spi...@gmail.com> said: > > >> You are confusing OO with static typing. > > > How supremely annoying to have gone to some lengths > > to carefully use the most strictly defined, modern type > > system terminology, only to have it labeled as a novice > > error by someone who missed my point entirely. > > How awful for you. I accept your apology. Marshall
On Mar 5, 10:40 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-05 14:10:11 -0600, "Dmitry A. Kazakov" > > >>>> <snip> > >>>> You are confusing OO with static typing. In OO languages like Ruby, > >>>> Python, or Smalltalk you can pass any object to any function > >>>> irrespective of type. > > >>> Which is a bad idea. > > >> Why? > > > Because it is in fact untyped. > > No, it's just not statically typed. This issue is entirely tangential to the thread, but just FYI. There is a strict, formal definition of "type" under which languages like Python, Smalltalk, etc. are untyped. This sense of the term is often favored in type theory, and in fact is the one that introduced the word "type" to mathematics. In common parlance, however, untyped languages are typically called "dynamically typed" in cases where they employ a runtime tag system to classify values. I propose the thread is already sufficiently contentious without introducing further areas of controversy, such as static typing vs. whatever you care to call the other thing. Marshall
On Mar 5, 10:38 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-05 03:13:59 -0600, "Dmitry A. Kazakov" > <mail...@dmitry-kazakov.de> said: > > > On Wed, 5 Mar 2008 01:10:01 -0600, Robert Martin wrote: > > >> On 2008-03-03 18:34:39 -0600, Marshall <marshall.spi...@gmail.com> said: > > >>> In SQL, if I have two relations with x and y int columns, I can > >>> union them, or join on them, or whatever. There is no way, > >>> in fact, to forbid such a thing, just like in Java there is no way > >>> to allow such a thing. > > >> You are confusing OO with static typing. In OO languages like Ruby, > >> Python, or Smalltalk you can pass any object to any function > >> irrespective of type. > > > Which is a bad idea. > > Bah. It is neither a bad nor a good idea. It is just an idea. Yes. Specifically, it is a tradeoff. I still get to prefer one side over the other though. :-) But that doesn't mean it's not a tradeoff. Marshall
On Mar 5, 10:32 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-05 09:48:45 -0600, Marshall <marshall.spi...@gmail.com> said: > > On Mar 4, 11:05 pm, Robert Martin <uncle...@objectmentor.com> wrote: > > >>> Furthermore, since OOPLs lack physical independence, traversing > >>> the graph may be quite expensive, particularly in the case where > >>> the graph is backed by storage in a database, which is part of > >>> why ORM is such a universally bad idea. > > >> No, you have this wrong. ORMs generally use standard SQL queries to > >> traverse and gather data from the DB. Then that data is placed into OO > >> structures so that the application can take advanage of the bias. > > > Just the fact that they use SQL isn't sufficient. They have to > > use it as well as a person could, though an interface that > > is generally information-lossy enough (or at least, used in > > a lossy way) that that's impossible. > > Yeah, assembly language programmers used to say the same thing about > compilers. Then the compilers started writing more efficient code than > the assembly language programmers could... I give you high marks for rhetoric here. Excellently argued! You take the opposing side and compare them to assembly, and compare yourself with compiled languages. That the situation is most closely analogous to exactly the reverse is only relevant if one is interested in a deep understanding doesn't detract at all from the rhetorical effectiveness. As an actual engineering argument, though, this fails. Because it doesn't address the information-loss point I made. No code generator can write optimal code if it's missing information necessary to determine what is optimal. Object-graph traversal in ORMs is *necessarily* more expensive than straightforward SQL. In part exactly because it is *necessarily* missing the information present in the head of the programmer who writes instead a single SQL statement, information that is then embodied in that statement. > In any case, good ORMs allow you to tune the SQL, so you *can* use it > as well as a person could. > > > The most gratuitous example I can think of was some early > > EJB containers I played with, back when I was still thinking > > that ORM was something that could possibly be done well. > > Against a table of a few hundred rows, one could execute > > "delete from table". The comparable command through the > > ORM issued SQL to load every row as an object, then > > in a loop called obj.delete() which issued a single DELETE > > statement for that row. It was ten thousand times slower, > > and that's for only a couple hundred rows. Of course this > > example is extreme, but it's still illustrative of a general > > principle. > > It is illustrative of a programmer who either doesn't know his tool or > didn't select a reasonable tool. I agree that the ultimate problem here is the programmer not selecting a reasonable tool. In particular, a programmer who selects an ORM has not selected a reasonable tool. > > I have *often* seen four and five order of magnitude > > performance difference between straight SQL and > > ORM SQL, across a wide variety of ORMs. The > > very idea of ORM demands it: you have to try to > > push a whole set-oriented language through a functional > > interface. > > Bah. You don't *have* to do any such thing.I won't argue that there > aren't programmers and teams who use their tools poorly. Nice dodge. ORMs are a breeding ground for antipatterns. 
OO code written in the style you advocate (Employee.get("bob")) is a performance disaster first, and benefit-free busy work last. Marshall
Robert Martin wrote: > On 2008-03-05 03:24:37 -0600, S Perryman <q@q.com> said: >> Robert Martin wrote: >>> You are confusing OO with static typing. In OO languages like Ruby, >>> Python, or Smalltalk you can pass any object to any function >>> irrespective of type. >> And you (both) are equating the (strong) typing model of Simula as the >> only strong typing model. > You presume too much. > Type inferrence can be nearly as flexible as dynamic typing. 1. Type inferencing is *strongly-typed* . All the process does is construct the effective type on behalf of the programmer. The type system makes no distinctions between type-checking of an inferred or manifestly-declared type (static or otherwise) . 2. The only thing that *weakly-typed* type systems (ie those for which variables, input/output parameters etc have no evident type - which is actually what most mean by "dynamic typing" ) allow, which strongly-typed type systems do not, is the following : op(T) { T.op1() ; if(some-condition) { T.op2() ; } } Most strongly-typed systems will require T (or any type substitutable with T) to possess both the op1 and op2 properties in order for an invocation of op to be allowed (***) . A weakly-typed system does not care, because all usage of non-existent properties on a type results in runtime failure at the point of use. *** Of course, tis a simple matter for a strongly-typed system to infer the conditional usage of op2, and generate a "caveat emptor" warning for any usage of op where the type of the T instance used does not possess the op2 property. Regards, Steven Perryman
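A tiny Java rendering of the op(T) example above, with invented names, showing the strongly-typed reading: the parameter's declared type must expose both op1 and op2 for the code to compile, even though op2 is only reached conditionally:

    // Sketch only; HasOps, op1, op2 and someCondition are invented names.
    interface HasOps {
        void op1();
        void op2();
    }

    class OpExample {
        static void op(HasOps t, boolean someCondition) {
            t.op1();
            if (someCondition) {
                t.op2();   // a weakly-typed system would only fail here, at run time
            }
        }
    }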
Yagotta B. Kidding wrote: > S Perryman <q@q.com> wrote in news:fqn0ir$mdg$1@aioe.org: >>boolean f(Tuple t) { return (t.x = 123) ; } >>Set<Tuple> S ; >>Set<Tuple> t = S.match(f) ; // or match(S,f) if one prefers >>1. How is the above not "set-oriented" ?? >>A set is given as input to a match operation which produces a >>set as output. >>2. I have no idea whatsoever *how* S performs the match by >> looking at the above. > 'Match' is cool, but what about more interesting operations like > 'project(join(R1,R2), R1.a1, R2.b3)' where R1 is a set of <c,a1,a2,a3> > tuples and R2 is a set of <c, b1,b2,b3> tuples ? How do you express > that in your fav OO language ? As I have said on numerous occasions, the semantics of "joins" are an issue for OO (specifically the fact that in OO any of the "values" of c/a1..a3/b1..b3 could be a computational operation and not a data value etc) . Regards, Steven Perryman
On Thu, 6 Mar 2008 00:21:22 -0600, Robert Martin wrote: > On 2008-03-05 02:56:02 -0600, "Dmitry A. Kazakov" > <mailbox@dmitry-kazakov.de> said: > >> On Wed, 5 Mar 2008 00:46:14 -0600, Robert Martin wrote: >> >>> On 2008-03-03 16:49:29 -0600, "David Cressey" <cressey73@verizon.net> said: >>> >>>> But the idea of a single language that is suitable for everything remains an >>>> elusive goal, and probably an unproductive endeavor. >>> >>> Agreed. >> >> Disagreed. >> >> The idea of multilingual system is the most damaging thing in software >> developing history. > > Whooo! Then I guess it's back to toggling in binary for us all. That does not imply. If you concede that in your system it would be OK to use SQL together with an OOPL X, then your argument of hiding SQL behind the scenes does not work. Because alleged technical merits of SQL should in some way show themselves in the design. That is the DB-guys point. (They go further and propose to scrap X.) My position is opposite. It is that SQL does not have such merits, it is there only as an interface to a legacy component. If that component had an OOPL X interface I would take it instead. It might appear radical, but in fact this is what all those zillions of language X-to-DBMS-Y bindings are about. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Wed, 5 Mar 2008 14:27:58 -0800 (PST), topmind wrote: > Dmitry A. Kazakov wrote: >> On Wed, 5 Mar 2008 11:34:05 +0000, Eric wrote: >> >>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >> >>> I don't believe that merely using an RDBMS will solve all problems. What >>> I meant was that, accepting what David said above, if you keep your data >>> in an RDBMS, it will be easily available for the solution of any >>> possible problem that can be solved using that data. >> >> No, this as well is wrong. Keeping "data" in RDBMS puts certain >> restrictions on what can be stored there and how it can be used later. [...] > A RDMBS > cannot stop you from doing anything you want to with retrieved data. Yes, exactly this is wrong. (I hope you don't have in mind retrieving all content and continuing without RDBMS.) -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On 3 Mar, 23:37, Bob Badour <bbad...@pei.sympatico.ca> wrote: > David Cressey wrote: > > "Bob Badour" <bbad...@pei.sympatico.ca> wrote in message > >news:47cc383f$0$4041$9a566e8b@news.aliant.net... > >>It's pretty obvious to me: object-relational mismatch is to relations as > >>assembler-object mismatch is to objects. > > > I didn't get this comment. Now that someone else has flagged it as a > > keeper, I feel the need to ask for an explanation. > > What do you know about assembler? - it maps (almost) 1 to 1 to machine code - the primitive operations are very primitive - the facilities for combining primitives are crude - hence doing anything non-trivial takes lots of primitives - not portable between architectures are any of those relevant to your statement? so... just as assembler "mismatches" objects so objects "mismatch" relations so you are saying objects are too low a level of abstraction? -- Nick Keighley
On Mar 6, 6:26 am, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-05 06:48:31 -0600, JOG <j...@cs.nott.ac.uk> said: > > >>> 1) So why not treat all 'inheritance' in this way? > > >> Because all inheritance is not about inference. > > > Hmmm. Then might you give an example of a situation where inheritance > > cannot be described in terms of inference? > > Inheritance is simply the redeclaration of functions and variables in a > subscope. Using nonsense words like 'subscope' doesn't do the conversation any favours Robert. Either way, let's consider the two concepts you mention. First "variable" inheritance: A person has a name (string) A man is a person |= A man has a name (string) Okay, that seems straightforward. So now functions: A clock_view has an update() A digital_clock_view is a clock_view |= a digital_clock_view has an update() Great, that makes sense too. In fact more than that, it seems entirely straightforward. > That's not inference. Well that all jolly-well looks like inference to me. I must have missed something.<scratches_head/> I don't know, maybe you've gotten so involved with OO that you can't see the wood for the trees in terms of abstracting away from the mechanism? > > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com > The Agile Transition Experts | web: www.objectmentor.com > 800-338-6716 |
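For reference, a small Java version of the same two steps, with invented class names, so the "redeclaration in a subscope" and "inference" readings can be compared against something concrete:

    // Person declares name once; Man redeclares nothing, yet every Man has a name.
    class Person {
        String name;
    }

    class Man extends Person {
        // "a man is a person |= a man has a name"
    }

    class ClockView {
        void update() { /* redraw the display */ }
    }

    class DigitalClockView extends ClockView {
        // "a digital_clock_view is a clock_view |= it has an update()"
    }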
On Mar 6, 12:09 pm, Nick Keighley <nick_keighley_nos...@hotmail.com> wrote: > On 3 Mar, 23:37, Bob Badour <bbad...@pei.sympatico.ca> wrote: > > > David Cressey wrote: > > > "Bob Badour" <bbad...@pei.sympatico.ca> wrote in message > > >news:47cc383f$0$4041$9a566e8b@news.aliant.net... > > >>It's pretty obvious to me: object-relational mismatch is to relations as > > >>assembler-object mismatch is to objects. > > > > I didn't get this comment. Now that someone else has flagged it as a > > > keeper, I feel the need to ask for an explanation. > > > What do you know about assembler? > > - it maps (almost) 1 to 1 to machine code What is mapping 1:1 between machine code and assembler. Tis the first time I hear somebody establishing cardinality between 2 languages. What a bunch of crap. > - the primitive operations are very primitive Uh... I was afraid that primitive operations would be secondary > - the facilities for combining primitives are crude ? nonsense > - hence doing anything non-trivial takes lots of primitives What does that mean? > - not portable between architectures > > are any of those relevent to your statement? What is relevant is the fact that you have only no clue how assembler works... > so... > just as assmbler "mismatches" objects > so objects "mismatch" relations What the hell does that mean... Object/assembler is a sloppy concept compared to a language Object/Relation is a sloppy concept compared to a mathematical construct And you say both are equal...According to what? > so you are saying objects are too low a level of abstraction? *Objects* are just vague concept for sloppy thinkers to put their ignorant teeth onto... > Nick Keighley
On 2008-03-05, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Wed, 5 Mar 2008 12:22:24 +0000, Eric wrote: > >> If you should not use multiple languages, there must be a universal >> language. > > Yep, it is called a universal/general purpose programming language. > >> What is it, is it really universal _right now_, and if not, > > It is, there exist many of them. So any general purpose language will do, as long as I use only that? Do I have to use the same one for the next task? > > The mistake you make is in a wrong presumption that "universal purpose" <=> > "best possible." > I presume no such thing. Any program can be written in any language, but particular combinations may be unwise. Why should I not choose the most appropriate language of those that are currently available to me? If I bring in an existing API to do some part of the task, why should I care what it is written in as long as I can call it? If it would help me to create an additional abstraction layer (i.e. my own API), why should I not use the most appropriate tool for it? What language do you use, and what is your math library written in? >> when will it be and what should we do in the meantime? > > Not to develop pet domain-specific languages if the advantages of those are > unclear. If you, say, wanted to create a declarative language based on > inference, then that should be a universal purpose one. > An API, or an abstraction layer, _is_ a domain-specific language. If it can be made easier to use by bolting a parser on to the front of it, why not do that? E
"Bob Badour" <bbadour@pei.sympatico.ca> wrote in message news:47cc8bc8$0$4038$9a566e8b@news.aliant.net... > David Cressey wrote: > > > "Bob Badour" <bbadour@pei.sympatico.ca> wrote in message > > news:47cc383f$0$4041$9a566e8b@news.aliant.net... > > > >>It's pretty obvious to me: object-relational mismatch is to relations as > >>assembler-object mismatch is to objects. > > > > I didn't get this comment. Now that someone else has flagged it as a > > keeper, I feel the need to ask for an explanation. > > What do you know about assembler? I used to work in assembler, a long, long time ago. Perhaps the most ambitious project I wrote in assembler was an automatic garbage collector for MDL, a programming language in the Lisp family. That was back in 1971. I haven't done anything significant in assembler since 1978.
On 2008-03-05, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Wed, 5 Mar 2008 11:34:05 +0000, Eric wrote: > >> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > >> I don't believe that merely using an RDBMS will solve all problems. What >> I meant was that, accepting what David said above, if you keep your data >> in an RDBMS, it will be easily available for the solution of any >> possible problem that can be solved using that data. > > No, this as well is wrong. Keeping "data" in RDBMS puts certain > restrictions on what can be stored there and how it can be used later. No it doesn't, in either case. > > You propose a solution (DB on a RDBMS). What was the problem? From your > description the problem looks like a persistence layer. This what c.d.t. > people vehemently disagree. Isn't it interesting to see how quickly a great > theory of all degrades to storage? A persistence layer is "my program has X, I need put put it away somewhere and get it back later". As long as what you get back is what you put, it's OK - no need to worry about the internal details of X or what it might be related to. An RDBMS is not a persistence layer. No-one from c.d.t has said that it can not be used as a persistence layer. What is often said is that using an RDBMS _merely_ as a persistence layer is a grossly misguided use of machine and human resources. The relational model is not a great theory of all, it is a logically-based recipe for the long-term management of data. You are the sort of person who degrades it by arguing from a position of ignorance of what it is really about. > >>> I guess that his hidden argument was "because RDB >>> apparently solves all problems, then data-centric view is the right one." >> >> No, the argument is that the data-centric view is often (not necessarily >> always) the right one to take, and that keeping the data in an RDBMS >> provides a better basis for solving future problems using the data than >> any other way of keeping it. > > Ah, that's better. So data-centric view is not universal => then there > exist cases which cannot be viewed that way => data is not a fundamental > term => data do not unconditionally exist. Are we in agreement? > No we are not. "cannot be viewed" should be "are not best viewed", and the rest of your implications are nonsense. >>> Yet another logical fallacy was a data-centric problem statement: "there >>> will never be any other application that will need my data," used in order >>> to prove data-centric view itself. >> >> There is no fallacy here. First there is a fact - you have data, however >> you choose to view it. Do you want to deny this? > > Certainly. Non-observable things do not exist. But the fallacy is that you > are putting existence of data as a fact from which you deduce data > existence. I have not deduced data existence from anything, I simply believe it to be a fact. > >>> In OO problems are not modeled in terms >>> of applications using data. >> >> No, but the data is still there. > > In which sense? Laplace has answered this two hundred years ago: "I did not > need to make such an assumption." > Your quote has insufficient context for me to have any idea what you think it means, or what Laplace thought it meant. What is your definition of "data"? E
On 2008-03-06, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Wed, 5 Mar 2008 14:27:58 -0800 (PST), topmind wrote: > >> Dmitry A. Kazakov wrote: >>> On Wed, 5 Mar 2008 11:34:05 +0000, Eric wrote: >>> >>>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>> >>>> I don't believe that merely using an RDBMS will solve all problems. What >>>> I meant was that, accepting what David said above, if you keep your data >>>> in an RDBMS, it will be easily available for the solution of any >>>> possible problem that can be solved using that data. >>> >>> No, this as well is wrong. Keeping "data" in RDBMS puts certain >>> restrictions on what can be stored there and how it can be used later. > > [...] >> A RDMBS >> cannot stop you from doing anything you want to with retrieved data. > > Yes, exactly this is wrong. (I hope you don't have in mind retrieving all > content and continuing without RDBMS.) > This is how any database-using program works - it retrieves the _relevant_ data and does what it needs to do with it. This is not "continuing without RDBMS". But then you are the person who thinks that a single tool should be able to do everything. Here is a hammer, please take that screw out. E
S Perryman <q@q.net> wrote in news:fqoenv$fo4$1@news.datemas.de: > Yagotta B. Kidding wrote: > >> S Perryman <q@q.com> wrote in news:fqn0ir$mdg$1@aioe.org: > >>>boolean f(Tuple t) { return (t.x = 123) ; } > >>>Set<Tuple> S ; > >>>Set<Tuple> t = S.match(f) ; // or match(S,f) if one prefers > >>>1. How is the above not "set-oriented" ?? > >>>A set is given as input to a match operation which produces a >>>set as output. > >>>2. I have no idea whatsoever *how* S performs the match by >>> looking at the above. > >> 'Match' is cool, but what about more interesting operations like >> 'project(join(R1,R2)), R1.a1, R2.b3)' where R1 is a set of >> <c,a1,a2,a3> tuples and R2 is a set of <c, b1,b2,b3> tuples ? How >> do you express that in your fav OO language ? > > As I have said on numerous occasions, the semantics of "joins" are an > issue for OO (specifically the fact that in OO any of the "values" of > c/a1..a3/b1..b3 could be a computational operation and not a data > value etc) . Does "the semantics of "joins" are an issue for OO" mean that relational joins cannot be implemented in principle in an object-oriented way ? > > > Regards, > Steven Perryman >
On 2008-03-05, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Wed, 5 Mar 2008 12:12:47 +0000, Eric wrote: > >> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>> On Tue, 4 Mar 2008 20:26:01 +0000, Eric wrote: >>> >>>> That's what I said - logically possible criteria. >>> >>> What about things which cannot be spelt in SQL? >> >> Use the query language (not necessarily SQL) to pass the data to >> something that can deal with it. > > This does not equate to "logically possible criteria." It is "something > that can understand this." My statement had two parts, they are about different things both interacting with the same data. Why are you trying to claim that I said the two parts are the same, just so you can say they are different? > >>> What about response times? >>> Can you specify/guess an upper bound for all requests? For a certain subset >>> of? >> >> This is just a prejudice - get the right RDBMS and the right expert to >> tune it (specifically for what it must do, not generically) and you >> might be surprised. > > This is not a prejudice, this is the point. I don't want to be surprised. > Engineering is about predictability. The surprise is you discovering something you did not know, it is not part of the engineering of the problem or the solution. > >>>> Relative to what? Where are the tests? Do you install an RDBMS product >>>> and just go with whatever myths you have heard lately, or do you get a >>>> product specialist to sort it out? >>> >>> Come on, show me the nearest neighbour search in ten-dimensional space >>> implemented in RDBMS. What would be the complexity of? You should clearly >>> understand that it is possible to break the neck of *any* indexing method. >>> This refutes the argument to "any logically possible criterion.". >> >> But I don't believe that an RDBMS can do everything. I would not be >> surprised to find the data defining the ten-dimensional space in an >> RDBMS, but I would never expect to do such a calculation in the query >> language. > > From this I conclude incompleteness of the approach. There is nothing bad > in that. At this point you should have said, "OK, how could we reason about > the applicability of this approach (and other approaches)." A framework > where that could be done is that common ground where c.d.t. and c.o. would > unite. > No-one but you ever said that the approach was complete, and you seem to have said that only to argue against it. >>>> I at least have no problem with using OO programming >>>> for that. Also, that explains your short-term view. But what do you do >>>> with the data (presumably transformed) that does get kept for longer? >>>> Put it somewhere that will be available for a variety of expected and >>>> unexpected uses? But we were here before! >>> >>> Yes, here we go again. Data are meaningless if usage is unexpected. Nobody >>> can use a CD-ROM in a wind-up phonograph, deaf people notably. >> >> No collection of data is useful to everybody, so deaf people have got >> nothing to do with it. > > Theorem: for any collection of data, there exist somebody who does not need > it. > That's what I said. I only said it to point out its irrelevance. >> Other than that, you have just demonstrated a lack of understanding of >> the difference between the logical and the physical. It is possible for >> me to collect the necessary bits of technology to transfer a piece of >> music from a CD-ROM to a disc or cylinder for the wind-up phonograph. It >> is still the same piece of music. > > No. 
Any logical is physical on another abstraction level. Ants do not > listen to music. Information does not exist without a receiver. > False, unknown, false. I might admit that information is not useful without the possibility of a receiver, but I don't see that it gets us anywhere. And I thought we were talking about data, not information. >>> The system does not keep anything it exists and behaves. >> >> But it has inputs and outputs. There may also be a need for it to record >> some of its behaviour. There may be a reason to keep some of the >> outputs, or even the inputs (for later extended analysis?). If you have >> a system that genuinely keeps nothing, I have no argument with how you >> choose to create it, as long as it works. > > No you get me wrong. Of course we do logging, in fact a lot of. The system > behaves as if it "kept" things. Whether it physically keeps them is an > implementation detail. Consider sine. Does it keep its output? It is a > meaningless question. You can tabulate sine and keep the table, you can use > Chebyshev's polynomials, or you can ask an oracle. It is no matter. Sine is > a behavior of real numbers. > And you never analyse those logs? Anyway, you could, so they are data. Sine is not a reasonable example, nor is it a behaviour. It is a mathematical function, a mapping. How can a mapping keep anything? But a computer program can keep the number that it passes in to an implementation of sine, and the number that it gets back. They are data. E
On 2008-03-06, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > On Thu, 6 Mar 2008 00:21:22 -0600, Robert Martin wrote: > >> On 2008-03-05 02:56:02 -0600, "Dmitry A. Kazakov" >> <mailbox@dmitry-kazakov.de> said: >> >>> On Wed, 5 Mar 2008 00:46:14 -0600, Robert Martin wrote: >>> >>>> On 2008-03-03 16:49:29 -0600, "David Cressey" <cressey73@verizon.net> said: >>>> >>>>> But the idea of a single language that is suitable for everything remains an >>>>> elusive goal, and probably an unproductive endeavor. >>>> >>>> Agreed. >>> >>> Disagreed. >>> >>> The idea of multilingual system is the most damaging thing in software >>> developing history. >> >> Whooo! Then I guess it's back to toggling in binary for us all. > > That does not imply. > > If you concede that in your system it would be OK to use SQL together with > an OOPL X, then your argument of hiding SQL behind the scenes does not > work. Because alleged technical merits of SQL should in some way show > themselves in the design. That is the DB-guys point. (They go further and > propose to scrap X.) Where do you get this stuff? None of the "DB-guys" said any such things. > > My position is opposite. It is that SQL does not have such merits, it is > there only as an interface to a legacy component. If that component had an > OOPL X interface I would take it instead. It might appear radical, but in > fact this is what all those zillions of language X-to-DBMS-Y bindings are > about. > I give up, you do not seem to make any effort to understand what others are saying, and you argue, not against what is said, but what you think is said - which you always get wrong! E
Yagotta B. Kidding wrote: > S Perryman <q@q.net> wrote in news:fqoenv$fo4$1@news.datemas.de: >>Yagotta B. Kidding wrote: >>>'Match' is cool, but what about more interesting operations like >>>'project(join(R1,R2)), R1.a1, R2.b3)' where R1 is a set of >>><c,a1,a2,a3> tuples and R2 is a set of <c, b1,b2,b3> tuples ? How >>>do you express that in your fav OO language ? >>As I have said on numerous occasions, the semantics of "joins" are an >>issue for OO (specifically the fact that in OO any of the "values" of >>c/a1..a3/b1..b3 could be a computational operation and not a data >>value etc) . > Does "the semantics of "joins" are an issue for OO" mean that relational > joins cannot be implemented in principle in an object-oriented way ? IMHO most things can be implemented in principle. Efficiency etc is another (important) matter. Thinking out loud ... My unease about joins is that in the ADT/OO sense, for your example etc we effectively create instances of a new type whose properties actually belong to multiple (existing) object instances. Properties that may be realised as computational operations. I feel that prevailing impls of OO are not up to the job of doing the above (although having a consistent Relational infrastructure that - apart from joins - works and is programmed the same way for both ADT-based and "data value only" types/tuples - is something I have long wanted) . I similarly feel that Functional programming may have the answers to the problems (the execution engines in particular) . But thinking out loud and "feeling" is hardly knowing (one way or the other), is it? :-( :-) Regards, Steven Perryman
S Perryman wrote: > Yagotta B. Kidding wrote: > >> S Perryman <q@q.com> wrote in news:fqn0ir$mdg$1@aioe.org: > > >>> boolean f(Tuple t) { return (t.x = 123) ; } > > >>> Set<Tuple> S ; > > >>> Set<Tuple> t = S.match(f) ; // or match(S,f) if one prefers > > >>> 1. How is the above not "set-oriented" ?? > > >>> A set is given as input to a match operation which produces a >>> set as output. > > >>> 2. I have no idea whatsoever *how* S performs the match by >>> looking at the above. > > >> 'Match' is cool, but what about more interesting operations like >> 'project(join(R1,R2)), R1.a1, R2.b3)' where R1 is a set of >> <c,a1,a2,a3> tuples and R2 is a set of <c, b1,b2,b3> tuples ? How do >> you express that in your fav OO language ? > > > As I have said on numerous occasions, the semantics of "joins" are an > issue for OO (specifically the fact that in OO any of the "values" of > c/a1..a3/b1..b3 could be a computational operation and not a data value > etc) . How is that any different than a relational view? Whether the data is calculated or stored directly is irrelevant.
S Perryman <q@q.com> wrote in news:fqp6ip$bje$1@aioe.org: > Yagotta B. Kidding wrote: >> Does "the semantics of "joins" are an issue for OO" mean that >> relational joins cannot be implemented in principle in an >> object-oriented way ? > > IMHO most things can be implemented in principle. > Efficiency etc is another (important) matter. > > Thinking out loud ... > > My unease about joins is that in the ADT/OO sense, for your example > etc we effectively create instances of a new type whose properties > actually belong to multiple (existing) object instances. Properties > that may be realised as computational operations. > > I feel that prevailing impls of OO are not up to the job of doing the > above (although having a consistent Relational infrastructure that > apart from joins - works is and is programmed the same way for both > ADT-based and "data value only" types/tuples - is something I have > long wanted) . That was refreshingly honest ! Thank you. However, without joins what is left of the RM ? I think that even projections are not very palatable to the OO model. If so, that leaves us with something not very useful for data management at all. > > I similarly feel that Functional programming may have the answers to > the problems (the execution engines in particular) . There are some ( http://www.csd.abdn.ac.uk/ ~pgray/graduate_course2004/monad.pdf ) but they have a bunch of their own issues, mainly related to join performance (not surprisingly). But, I agree, FP way may still bear some relational fruits as opposed to the barren OOP tree ;) > > But thinking out loud and "feeling" is hardly knowing (one way or the > other) is it. :-( :-) > > > Regards, > Steven Perryman >
Nick Keighley wrote: > On 3 Mar, 23:37, Bob Badour <bbad...@pei.sympatico.ca> wrote: > >>David Cressey wrote: >> >>>"Bob Badour" <bbad...@pei.sympatico.ca> wrote in message >>>news:47cc383f$0$4041$9a566e8b@news.aliant.net... > > >>>>It's pretty obvious to me: object-relational mismatch is to relations as >>>>assembler-object mismatch is to objects. >> >>>I didn't get this comment. Now that someone else has flagged it as a >>>keeper, I feel the need to ask for an explanation. >> >>What do you know about assembler? > > > - it maps (almost) 1 to 1 to machine code > - the primitive operations are very primitive > - the facilities for combining primitives are crude > - hence doing anything non-trivial takes lots of primitives > - not portable between architectures > > are any of those relevent to your statement? > > so... > just as assmbler "mismatches" objects > so objects "mismatch" relations > > so you are saying objects are too low a level of abstraction? Indeed.
David Cressey wrote: > "Bob Badour" <bbadour@pei.sympatico.ca> wrote in message > news:47cc8bc8$0$4038$9a566e8b@news.aliant.net... > >>David Cressey wrote: >> >>>"Bob Badour" <bbadour@pei.sympatico.ca> wrote in message >>>news:47cc383f$0$4041$9a566e8b@news.aliant.net... >>> >>>>It's pretty obvious to me: object-relational mismatch is to relations as >>>>assembler-object mismatch is to objects. >>> >>>I didn't get this comment. Now that someone else has flagged it as a >>>keeper, I feel the need to ask for an explanation. >> >>What do you know about assembler? > > I used to work in assembler, a long, long time ago. Perhaps the most > ambitious project I wrote in assembler was an automatic garbage collector > for MDL, a programming language in the Lisp family. That was back in 1971. > I haven't done anything significant in assembler since 1978. What part of the analogy leaves you confused, then?
Marshall wrote: > On Mar 5, 12:19 pm, rugby fan <q...@q.com> wrote: SP>But you cannot claim that all OO prog langs will have an implementation SP>that forces the fallacious "one at a time" scheme M>Can you name one that doesn't? (I think I agree that M>one could exist in theory, however.) >>CLOS, OCaml etc (by virtue of being built on Functional prog langs) . > Well, hrmm. Let's see. While those are certainly *interesting* > and *surprising* examples (in this context) I don't know that > they qualify as counterexamples to my point. > Take CLOS. It is an object system built on LISP. As such, is > it fair to consider it in terms of the primitives or "axioms" that > LISP is built on? [ stuff snipped - but read and acknowledged - it helped to clarify the point you have wanted to put across. ] > Many good things can be said of this pattern, but it is not set-oriented. > I am not familiar enough with OCaml to say much. But > if we were talking SML, (a language I have the utmost > respect for) I would still not consider it set-oriented. > map(), filter(), fold(), while providing collection-oriented > interfaces, are typically not primitive in the language. OK. You regard "X-oriented" as something for which the facilities to support X are provided at a fundamental level (like arithmetic ops, CAR/CDR in Lisp etc) and not built (by whoever) from the fundamental constructs of a prog lang. Fair enough. M>Why are you bringing up functional programming when M>talking about OOPLs? >>Because FP has : >>- been able to support OO quite easily >>- has an interesting execution infrastructure that IMHO makes it a >> good candidate for supporting the Relational paradigm > Hrmmm, well, I have to admit you make a good argument here. > (Although you must grant me that, we people speak of OOPLs, > they are not usually thinking CLOS!) I would not call it an "argument" as such. More thinking out loud about candidate technologies for resolving the impedence problem (conceptually and performance-wise) . > But I am, rather, speaking of what languages' axiomatic natures are. > I could, in fact, write a Relation class in Java, and provide set- > oriented > methods for it, and a whole host of relational goodness. But that > wouldn't make Java a relational language per se. > Virtually no languages have primitive support for anything like a > collection. SQL and SETL and a few others; that's it. There are > some languages that were designed from the start with *list* > processing in mind: lisp (and I should probably also mention the > APL family here.) There are some *very* interesting things > in there, but not things I would say could be strictly described > as set-oriented. I take a slightly different view in that I obviously have the need for various collection types, and support for a Relational "calculus" to use on those collections. I also need support for collections of ADTs in particular. If possible I would like the prog lang user to be able to construct specific collections such as sets etc, and for the prog lang env to be within reasonable performance of something that is designed for the support of some one specific aspect (as Lisp is for lists etc) . I think that FP is the paradigm that could possibly do this. Regards, Steven Perryman
Bob Badour wrote: > S Perryman wrote: >> Yagotta B. Kidding wrote: >>> 'Match' is cool, but what about more interesting operations like >>> 'project(join(R1,R2)), R1.a1, R2.b3)' where R1 is a set of >>> <c,a1,a2,a3> tuples and R2 is a set of <c, b1,b2,b3> tuples ? How >>> do you express that in your fav OO language ? >> As I have said on numerous occasions, the semantics of "joins" are an >> issue for OO (specifically the fact that in OO any of the "values" of >> c/a1..a3/b1..b3 could be a computational operation and not a data value >> etc) . > How is that any different than a relational view? Whether the data is > calculated or stored directly is irrelevant. I know this should be true, but something in my head keeps screaming "it's not that simple !!!" . :-( :-) I feel that thoughts of specific impls of both RDBs and OO prog langs are perhaps preventing me from stepping back and seeing things cleanly. But I am interested in exploring the notion of an 'ADT join' and how it might be implemented by an OO prog lang. Regards, Steven Perryman
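As a concrete starting point for that exploration, here is a minimal sketch of the project(join(R1,R2), R1.a1, R2.b3) expression quoted above, written with Java records and streams (the R1/R2/Projected names are invented for illustration, and the quoted predicate t.x = 123 is read as a comparison). It is only a nested-loop join over plain data values, so it says nothing about Perryman's harder question of attributes that are computed rather than stored, nor about efficiency.

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class AdtJoinSketch {
    // Stand-ins for R1<c,a1,a2,a3> and R2<c,b1,b2,b3>.
    record R1(int c, String a1, String a2, String a3) {}
    record R2(int c, String b1, String b2, String b3) {}
    // The result type of project(join(R1,R2), R1.a1, R2.b3).
    record Projected(String a1, String b3) {}

    static Set<Projected> projectJoin(List<R1> r1s, List<R2> r2s) {
        return r1s.stream()
                  .flatMap(r1 -> r2s.stream()
                                     .filter(r2 -> r1.c() == r2.c())                // join on the common attribute c
                                     .map(r2 -> new Projected(r1.a1(), r2.b3())))   // projection onto {a1, b3}
                  .collect(Collectors.toSet());                                     // a Set, so duplicates collapse
    }

    public static void main(String[] args) {
        var r1s = List.of(new R1(1, "x", "y", "z"), new R1(2, "p", "q", "r"));
        var r2s = List.of(new R2(1, "u", "v", "w"));
        System.out.println(projectJoin(r1s, r2s));   // [Projected[a1=x, b3=w]]
    }
}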
On Thu, 6 Mar 2008 14:58:35 +0000, Eric wrote: > I have not deduced data existence from anything, I simply believe it to > be a fact. I see, it is a theological issue then. You are free to believe in what you want. That is out of my interest. >>>> In OO problems are not modeled in terms >>>> of applications using data. >>> >>> No, but the data is still there. >> >> In which sense? Laplace has answered this two hundred years ago: "I did not >> need to make such an assumption." > > Your quote has insufficient context for me to have any idea what you > think it means, or what Laplace thought it meant. He meant beliefs. > What is your definition of "data"? I don't need it defined. But if you ask, it is _value_. Specifically, one of some *built-in* type. A low-level abstraction concept typical of early universal purpose programming languages without ADTs, like FORTRAN. Data are aggregated mechanically without further abstraction. Contemporary use examples are data-oriented domain-specific 4GL, like SQL (data query), Simulink (signal processing). The weak point is the presumption of equality of the domain entities and the computational objects and thus 1) built-in types, 2) deduced algorithms from a narrow set (indexing in RDBMS, equation solving in Simulink). This is also the criterion of applicability of a given data-oriented language. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Thu, 6 Mar 2008 15:04:16 +0000, Eric wrote: > On 2008-03-06, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >> On Wed, 5 Mar 2008 14:27:58 -0800 (PST), topmind wrote: >> >>> Dmitry A. Kazakov wrote: >>>> On Wed, 5 Mar 2008 11:34:05 +0000, Eric wrote: >>>> >>>>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>>> >>>>> I don't believe that merely using an RDBMS will solve all problems. What >>>>> I meant was that, accepting what David said above, if you keep your data >>>>> in an RDBMS, it will be easily available for the solution of any >>>>> possible problem that can be solved using that data. >>>> >>>> No, this as well is wrong. Keeping "data" in RDBMS puts certain >>>> restrictions on what can be stored there and how it can be used later. >> >> [...] >>> A RDMBS >>> cannot stop you from doing anything you want to with retrieved data. >> >> Yes, exactly this is wrong. (I hope you don't have in mind retrieving all >> content and continuing without RDBMS.) > > This is how any database-using program works - it retrieves the _relevant_ > data and does what it needs to do with it. I wrote about "all" in order to exclude an argument to completeness. If it is not all, then "anything" does not apply. (Simple example: a stream can be accessed randomly only if all read.) I.e. the argument is wrong. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
Yagotta B. Kidding wrote: > S Perryman <q@q.com> wrote in news:fqp6ip$bje$1@aioe.org: >>Yagotta B. Kidding wrote: YK>Does "the semantics of "joins" are an issue for OO" mean that YK>relational joins cannot be implemented in principle in an YK>object-oriented way ? YK>IMHO most things can be implemented in principle. YK>Efficiency etc is another (important) matter. >>Thinking out loud ... >>My unease about joins is that in the ADT/OO sense, for your example >>etc we effectively create instances of a new type whose properties >>actually belong to multiple (existing) object instances. Properties >>that may be realised as computational operations. >>I feel that prevailing impls of OO are not up to the job of doing the >>above (although having a consistent Relational infrastructure that >>apart from joins - works is and is programmed the same way for both >>ADT-based and "data value only" types/tuples - is something I have >>long wanted) . > That was refreshingly honest ! Thank you. Which is in the main, only opinions. Nothing objectively provable etc. > However, without joins what is left of the RM ? The ability to construct arbitrary set/predicate-based expressions to access entities in some given info base is still very valuable, is it not. > I think that even projections are not very palatable to the OO model. Projection is covered by type substitutability at the basic level. It is only a problem for specific substitutability models (the Simula model being a case in point) . >>I similarly feel that Functional programming may have the answers to >>the problems (the execution engines in particular) . > There are some ( http://www.csd.abdn.ac.uk/ > ~pgray/graduate_course2004/monad.pdf ) but they have a bunch of their own > issues, mainly related to join performance (not surprisingly). But, I > agree, FP way may still bear some relational fruits as opposed to the > barren OOP tree ;) From my undergrad times, the things about FP execution envs that impressed me was the organisation of programs as directed graphs, sharing the same program graphs, lazy evaluation etc. A lot of this looks to me like it may provide some of the performance gains needed for a 'Relational OO' system to perform comparably to your typical SQL engine etc. But again, merely opinion. Regards, Steven Perryman
On Thu, 6 Mar 2008 14:30:56 +0000, Eric wrote: > On 2008-03-05, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >> On Wed, 5 Mar 2008 12:22:24 +0000, Eric wrote: >> >>> If you should not use multiple languages, there must be a universal >>> language. >> >> Yep, it is called a universal/general purpose programming language. >> >>> What is it, is it really universal _right now_, and if not, >> >> It is, there exist many of them. > > So any general purpose language will do, as long as I use only that? Do > I have to use the same one for the next task? Yes, but purpose is not the only criterion. For that matter, VB is a general purpose language. >> The mistake you make is in a wrong presumption that "universal purpose" <=> >> "best possible." > > I presume no such thing. > > Any program can be written in any language, but particular combinations > may be unwise. Why should I not choose the most appropriate language of > those that are currently available to me? Because it is bad engineering. Any possible local advantage will be consumed by multilingual impedance. > If I bring in an existing API to do some part of the task, why should I > care what it is written in as long as I can call it? If it would help me > to create an additional abstraction layer (i.e. my own API), why should > I not use the most appropriate tool for it? Because it is software design / performance / maintenance nightmare. > What language do you use, > and what is your math library written in? This is a poor example. Firstly math libraries of elementary functions are more or less well defined, deeply studied, solid mathematical and CS background. The interfaces and implementations are stable. It is not my system, I don't program them. Secondly, certainly I would like them rewritten in a language better than K&R C / FORTRAN IV. These languages have undefined numeric behavior in numerous cases. I prefer exceptions and interval computation based version. I would like flexible precision control, as well as fixed-point versions and dimensioned versions. Neither could be delivered in C/FORTRAN. >>> when will it be and what should we do in the meantime? >> >> Not to develop pet domain-specific languages if the advantages of those are >> unclear. If you, say, wanted to create a declarative language based on >> inference, then that should be a universal purpose one. > > An API, or an abstraction layer, _is_ a domain-specific language. If it > can be made easier to use by bolting a parser on to the front of it, why > not do that? Because handicraft is not engineering. Statistically this does not pay off. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Thu, 6 Mar 2008 15:20:48 +0000, Eric wrote: > On 2008-03-05, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >> On Wed, 5 Mar 2008 12:12:47 +0000, Eric wrote: >> >>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: >>>> On Tue, 4 Mar 2008 20:26:01 +0000, Eric wrote: >>>> >>>>> That's what I said - logically possible criteria. >>>> >>>> What about things which cannot be spelt in SQL? >>> >>> Use the query language (not necessarily SQL) to pass the data to >>> something that can deal with it. >> >> This does not equate to "logically possible criteria." It is "something >> that can understand this." > > My statement had two parts, they are about different things both interacting > with the same data. Why are you trying to claim that I said the two parts > are the same, just so you can say they are different? I didn't. I just asked for a specification of "any logically possible criterion" to give you enough rope to hang yourself. >>>> What about response times? >>>> Can you specify/guess an upper bound for all requests? For a certain subset >>>> of? >>> >>> This is just a prejudice - get the right RDBMS and the right expert to >>> tune it (specifically for what it must do, not generically) and you >>> might be surprised. >> >> This is not a prejudice, this is the point. I don't want to be surprised. >> Engineering is about predictability. > > The surprise is you discovering something you did not know, it is not > part of the engineering of the problem or the solution. Discovering new things is called research. If I were at a university I would enjoy surprises, but then RDBMS would draw my attention even lesser than now. >>>>> Relative to what? Where are the tests? Do you install an RDBMS product >>>>> and just go with whatever myths you have heard lately, or do you get a >>>>> product specialist to sort it out? >>>> >>>> Come on, show me the nearest neighbour search in ten-dimensional space >>>> implemented in RDBMS. What would be the complexity of? You should clearly >>>> understand that it is possible to break the neck of *any* indexing method. >>>> This refutes the argument to "any logically possible criterion.". >>> >>> But I don't believe that an RDBMS can do everything. I would not be >>> surprised to find the data defining the ten-dimensional space in an >>> RDBMS, but I would never expect to do such a calculation in the query >>> language. >> >> From this I conclude incompleteness of the approach. There is nothing bad >> in that. At this point you should have said, "OK, how could we reason about >> the applicability of this approach (and other approaches)." A framework >> where that could be done is that common ground where c.d.t. and c.o. would >> unite. > > No-one but you ever said that the approach was complete, and you seem to > have said that only to argue against it. You already said that. But you didn't confirmed understanding the implications I wrote about. >>> Other than that, you have just demonstrated a lack of understanding of >>> the difference between the logical and the physical. It is possible for >>> me to collect the necessary bits of technology to transfer a piece of >>> music from a CD-ROM to a disc or cylinder for the wind-up phonograph. It >>> is still the same piece of music. >> >> No. Any logical is physical on another abstraction level. Ants do not >> listen to music. Information does not exist without a receiver. > > False, unknown, false. 
I might admit that information is not useful > without the possibility of a receiver, but I don't see that it gets us > anywhere. And I thought we were talking about data, not information. In your other post you said that data is a belief. So information must be another one, even more fuzzy and thus requiring a greater portion of mysticism... >>>> The system does not keep anything it exists and behaves. >>> >>> But it has inputs and outputs. There may also be a need for it to record >>> some of its behaviour. There may be a reason to keep some of the >>> outputs, or even the inputs (for later extended analysis?). If you have >>> a system that genuinely keeps nothing, I have no argument with how you >>> choose to create it, as long as it works. >> >> No you get me wrong. Of course we do logging, in fact a lot of. The system >> behaves as if it "kept" things. Whether it physically keeps them is an >> implementation detail. Consider sine. Does it keep its output? It is a >> meaningless question. You can tabulate sine and keep the table, you can use >> Chebyshev's polynomials, or you can ask an oracle. It is no matter. Sine is >> a behavior of real numbers. > > And you never analyse those logs? Anyway, you could, so they are data. How one follows from another? > Sine is not a reasonable example, nor is it a behaviour. It is a > mathematical function, a mapping. How can a mapping keep anything? But a > computer program can keep the number that it passes in to an > implementation of sine, and the number that it gets back. They are data. Let I store all these data in a RDBMS, would in be same data but different sine? Does SELECT y FROM sine WHEN x= behave differently from sine? -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Mar 6, 10:38 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > Does SELECT y FROM sine WHEN x= behave differently from sine? OK, this is actually a good Turing test. So what function does the following query SELECT x FROM sine WHEN y=0.5 represent?
Tegiri Nenashi wrote: > On Mar 6, 10:38 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > wrote: > >>Does SELECT y FROM sine WHEN x= behave differently from sine? > > > OK, this is actually a good Turing test. So what function the > following query > > SELECT x FROM sine WHEN y=0.5 > > represent? arcsine(0.5)
Tegiri Nenashi wrote: > On Mar 6, 10:38 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > wrote: > >>Does SELECT y FROM sine WHEN x= behave differently from sine? > > > OK, this is actually a good Turing test. So what function the > following query > > SELECT x FROM sine WHEN y=0.5 > > represent? Er, no wait! What are the attributes of sine?
On Mar 6, 11:06 am, Bob Badour <bbad...@pei.sympatico.ca> wrote: > Tegiri Nenashi wrote: > > On Mar 6, 10:38 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > > wrote: > > >>Does SELECT y FROM sine WHEN x= behave differently from sine? > > > OK, this is actually a good Turing test. So what function the > > following query > > > SELECT x FROM sine WHEN y=0.5 > > > represent? > > arcsine(0.5) Eh, Bob, you spoiled all the fun -- I didn't doubt you (or anybody on the relational side) would solve this easily. I honestly wanted to clarify how capable the other side is...
On Mar 6, 7:20 am, Eric <e...@deptj.demon.co.uk> wrote: > > Sine is not a reasonable example, nor is it a behaviour. It is a > mathematical function, a mapping. What is your definition of "behavior"? How specifically (if at all) does it differ from "sending a message", "invoking a function" and "invoking a method"? Does that distinction (if it exists) exist for methods of value classes? Of classes with mutable state? I am curious. I am also curious to see the degree of agreement between your answers and other comp.object regulars. Marshall
On Mar 6, 9:39 am, S Perryman <q...@q.com> wrote: > I am interested in exploring the notion of an 'ADT join' and > how it might be implemented by an OO prog lang. Join is indeed the most interesting operation. In the other message you dismissed projection as being covered by the concept of subclassing. Can you please be more specific? If we remove some [data] attributes, does it mean the resulting "entity" is a subclass? Then consider selection. One might be tempted to reason that it is a selection that makes a subclass by restricting to a subset of objects. The duality between rows and columns (tuples and attributes) IMO has something to do with those perpetual-motion LSP debates. I can suggest that the FCA method (which I mentioned in the very beginning of this flame war) is the right foundation for class hierarchies.
Tegiri Nenashi wrote: > On Mar 6, 9:39 am, S Perryman <q...@q.com> wrote: >>I am interested in exploring the notion of an 'ADT join' and >>how it might be implemented by an OO prog lang. > Join is indeed the most interesting operation. > In the other message > you dismissed projection as being covered by the concept of > subclassing. No. Projection is covered by *type substitutability* . > Can you please be more specific? If we remove some [data] > attributes, does it mean the resulting "entity" is a subclass. No, merely that the resulting entity is now deemed to be of another type, substitutable with the original type. type T { x, y, z } Set<T> ts ; Set< type { x, y } > ps = { e IN ts : e.x > 123 } ; The elements of ps are effectively projections of the elements in ts. > Then consider selection. One might be tempted to reason that it is a > selection that makes a subclass by restricting to a subset of > objects. Here you might be talking about what Wegner and Zdonik (1988) termed "subset subtypes" , whereby the set of values applicable to subtype properties is a subset of those applicable to the supertype. Off-hand mental recall of one of their examples : type Person { name : string ; age : [0..long-time] ; } type Employee { name : string ; age : [16..long-time] ; } The set of Employee instances is a subset of the set of Person instances. > The duality between rows and columns (tuples and attributes) IMO has > something to do with those pepetuum motion LSP debates. Not really. The "LSP debates" come from the fact that Liskov/Wing types (and subtype relationships thereof) are governed by semantics such as pre/post/invariant conditions. Conflicts between such definitions cause substitutability violations. But sadly are often claimed/confused as issues of "mutability" etc. Regards, Steven Perryman
S Perryman <q@q.com> wrote in news:fqpj0o$lke$1@aioe.org: > Tegiri Nenashi wrote: > >> On Mar 6, 9:39 am, S Perryman <q...@q.com> wrote: > >>>I am interested in exploring the notion of an 'ADT join' and >>>how it might be implemented by an OO prog lang. > >> Join is indeed the most interesting operation. > >> In the other message >> you dismissed projection as being covered by the concept of >> subclassing. > > No. Projection is covered by *type substitutability* . > > >> Can you please be more specific? If we remove some [data] >> attributes, does it mean the resulting "entity" is a subclass. > > No, merely that the resulting entity is now deemed to be of > another type, substitutable with the original type. > > type T > { > x, y, z > } > > Set<T> ts ; > Set< type { x, y } > ps = { e IN ts : e.x > 123 } ; > > The elements of ps are effectively projections of the elements in ts. > Hold on, you have a value of type Set<T> assigned to a variable of type Set<type{x,y}>. How is it even syntactically correct ? Is not assignment of a value of one type to a variable of another type illegal ??? It's, like, totally basic stuff they teach you about typing and such ! Methinks you've made an inadvertent mistake, didn't you ?
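For what it's worth, one way to make the fragment type-check in a nominally typed language such as Java is to name the projected type explicitly and let T be substitutable for it - a sketch only, with the HasXY name invented here. Note that this yields a restricted view of the same objects rather than a true projection: the z component is still there behind the interface.

import java.util.Set;
import java.util.stream.Collectors;

public class ProjectionSketch {
    // The anonymous "type { x, y }" must become a named supertype in Java.
    interface HasXY {
        int x();
        int y();
    }

    // T is substitutable for HasXY wherever a HasXY is expected.
    record T(int x, int y, int z) implements HasXY {}

    static Set<HasXY> ps(Set<T> ts) {
        return ts.stream()
                 .filter(e -> e.x() > 123)     // the selection { e IN ts : e.x > 123 }
                 .map(e -> (HasXY) e)          // widen each element; a Set<T> is not itself a Set<HasXY>
                 .collect(Collectors.toSet());
    }
}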
On Mar 6, 9:27 am, S Perryman <q...@q.com> wrote: > Marshall wrote: > > > Many good things can be said of this pattern, but it is not set-oriented. > > I am not familiar enough with OCaml to say much. But > > if we were talking SML, (a language I have the utmost > > respect for) I would still not consider it set-oriented. > > map(), filter(), fold(), while providing collection-oriented > > interfaces, are typically not primitive in the language. > > OK. > You regard "X-oriented" as something for which the facilities to support > X are provided at a fundamental level (like arithmetic ops, CAR/CDR in > Lisp etc) and not built (by whoever) from the fundamental constructs of > a prog lang. > > Fair enough. Right. If we consider *potential* built constructs, and we are speaking of general purpose programming languages, then right away all languages collapse into a single category. We might make some allowances for languages whose standard library contains specific constructs, however. We might reasonably classify some functional languages that make extensive use of map, filter, and fold as being at least modestly list-oriented, despite the fact that their list constructs are really recursively defined union types. But it's probably better to consider the recursively defined type together with the higher-order function constructs as being the axiomatic signature of such languages. Another important reason to consider the fundamental constructs of a PL is so that we can treat the PL as an axiomatic system. To me, the most interesting thing one can do with code is *reason* about it. This is why type systems are interesting (and "dynamically typed" languages not so much.) It's also part of why the relational model is so interesting. And it's why OOPLs don't hold my interest so much any more; their axioms are complicated and often weak. > M>Why are you bringing up functional programming when > M>talking about OOPLs? > > >>Because FP has : > >>- been able to support OO quite easily > >>- has an interesting execution infrastructure that IMHO makes it a > >> good candidate for supporting the Relational paradigm > > Hrmmm, well, I have to admit you make a good argument here. > > (Although you must grant me that, we people speak of OOPLs, > > they are not usually thinking CLOS!) > > I would not call it an "argument" as such. > More thinking out loud about candidate technologies for resolving > the impedence problem (conceptually and performance-wise) . By all means, please continue to think out loud. You do it well. > > But I am, rather, speaking of what languages' axiomatic natures are. > > I could, in fact, write a Relation class in Java, and provide set- > > oriented > > methods for it, and a whole host of relational goodness. But that > > wouldn't make Java a relational language per se. > > Virtually no languages have primitive support for anything like a > > collection. SQL and SETL and a few others; that's it. There are > > some languages that were designed from the start with *list* > > processing in mind: lisp (and I should probably also mention the > > APL family here.) There are some *very* interesting things > > in there, but not things I would say could be strictly described > > as set-oriented. > > I take a slightly different view in that I obviously have the need > for various collection types, and support for a Relational "calculus" > to use on those collections. I also need support for collections of > ADTs in particular. 
The collections question is quite interesting: Lists, maps, bags, sets, tables, trees. It is apparent after only a modest amount of study that maps, sets, and tables are thoroughly and beautifully handled with relations. It took me a lot longer but I've reached the conclusion that lists fall into the same category. Trees no longer seem like a single category to me: there are what I call "statically structured" trees, for example Customers/Invoices/InvoiceLineItems, and "dynamically structured" trees, such as a parse tree. Statically structured trees are a particular strong point for SQL and also a strong point for the RM. Dynamically structured trees *can be* handled with the RM but I don't think it does as good a job as is done in, say, FP languages with union types and structural recursion. I conjecture that it may be possible to develop best practices and/or tiny extensions to the RM such that it does as good a job, but currently I'm leaning in the direction of thinking this will not be possible. I'm not completely thrilled with structural recursion, but I have yet to see anything better. Bags I have not looked at in any detail. They are certainly the least important of the lot, however. > If possible I would like the prog lang user to be able to construct > specific collections such as sets etc, and for the prog lang env to be > within reasonable performance of something that is designed for the > support of some one specific aspect (as Lisp is for lists etc) . > > I think that FP is the paradigm that could possibly do this. An important aspect of the performance requirement is physical independence, something that is largely ignored in PL theory, sadly. FP is an important area for study, no doubt. If we look at the mathematical universe, we usually encounter an extensional viewpoint. And if something is expressed intensionally, or algorithmically, mathematicians are free to immediately think of its extension, because they do not have to limit themselves to what is computable. On the other hand, if we look at the computable universe, we see more often the intensional viewpoint. And we are not free to immediately shift into extensional mode, because of the possibly-infinite, almost-certainly-prohibitive cost of doing so. The RM gives the best handle on the computably extensional viewpoint I have encountered. FP gives the best handle on the intensional viewpoint. There's more to this line of thought, but it's lunchtime! Marshall
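To make the "dynamically structured" case concrete, here is a small sketch of a parse-tree-like structure as a union type with structural recursion, assuming a recent Java with sealed interfaces and pattern matching for switch (the Expr/Num/Add names are invented). The relational rendering would instead be something like a node relation plus parent-child edges, which is where the RM starts to feel less direct.

public class TreeSketch {
    // A union type: an expression is either a literal or an addition.
    sealed interface Expr permits Num, Add {}
    record Num(double value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}

    // Structural recursion: one case per variant, checked exhaustively by the compiler.
    static double eval(Expr e) {
        return switch (e) {
            case Num n -> n.value();
            case Add a -> eval(a.left()) + eval(a.right());
        };
    }

    public static void main(String[] args) {
        Expr tree = new Add(new Num(1), new Add(new Num(2), new Num(3)));
        System.out.println(eval(tree));   // 6.0
    }
}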
Robert Martin wrote: > On 2008-03-05 09:48:45 -0600, Marshall <marshall.spight@gmail.com> said: > > > On Mar 4, 11:05 pm, Robert Martin <uncle...@objectmentor.com> wrote: > >>> > >>> Furthermore, since OOPLs lack physical independence, traversing > >>> the graph may be quite expensive, particularly in the case where > >>> the graph is backed by storage in a database, which is part of > >>> why ORM is such a universally bad idea. > >> > >> No, you have this wrong. ORMs generally use standard SQL queries to > >> traverse and gather data from the DB. Then that data is placed into OO > >> structures so that the application can take advantage of the bias. > > > > Just the fact that they use SQL isn't sufficient. They have to > > use it as well as a person could, through an interface that > > is generally information-lossy enough (or at least, used in > > a lossy way) that that's impossible. > > Yeah, assembly language programmers used to say the same thing about > compilers. Then the compilers started writing more efficient code than > the assembly language programmers could... > > In any case, good ORMs allow you to tune the SQL, so you *can* use it > as well as a person could. > > > The most gratuitous example I can think of was some early > > EJB containers I played with, back when I was still thinking > > that ORM was something that could possibly be done well. > > Against a table of a few hundred rows, one could execute > > "delete from table". The comparable command through the > > ORM issued SQL to load every row as an object, then > > in a loop called obj.delete() which issued a single DELETE > > statement for that row. It was ten thousand times slower, > > and that's for only a couple hundred rows. Of course this > > example is extreme, but it's still illustrative of a general > > principle. > > It is illustrative of a programmer who either doesn't know his tool or > didn't select a reasonable tool. > > > > I have *often* seen four and five orders of magnitude > > performance difference between straight SQL and > > ORM SQL, across a wide variety of ORMs. The > > very idea of ORM demands it: you have to try to > > push a whole set-oriented language through a functional > > interface. > > Bah. You don't *have* to do any such thing. I won't argue that there > aren't programmers and teams who use their tools poorly. The industry is full of horror stories with regard to ORM mappers. They are either a pain in the butt, or expensive to find and hire good ORM experts for. It is a waste of resources to translate back and forth between two fairly high-level paradigms. Embrace the RDB and don't use heavy OOP for domain nouns, and you have fewer headaches. > > -- > Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com -T-
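The shape of the failure Marshall describes is easy to show in plain JDBC - a sketch only, with the table and column names invented and error handling omitted. The first method is one set-oriented statement; the second is the row-at-a-time traffic a naive mapping layer generates.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class DeleteStyles {
    // One statement; the DBMS does all the work in a single round trip.
    static void bulkDelete(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("DELETE FROM line_items");
        }
    }

    // The pattern Marshall observed: load every row, then delete each one individually.
    static void rowAtATimeDelete(Connection conn) throws SQLException {
        List<Long> ids = new ArrayList<>();
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id FROM line_items")) {
            while (rs.next()) {
                ids.add(rs.getLong("id"));            // "load every row as an object"
            }
        }
        try (PreparedStatement del = conn.prepareStatement("DELETE FROM line_items WHERE id = ?")) {
            for (long id : ids) {                     // then one DELETE statement per row
                del.setLong(1, id);
                del.executeUpdate();
            }
        }
    }
}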
Dmitry A. Kazakov wrote: > On Wed, 5 Mar 2008 14:27:58 -0800 (PST), topmind wrote: > > > Dmitry A. Kazakov wrote: > >> On Wed, 5 Mar 2008 11:34:05 +0000, Eric wrote: > >> > >>> On 2008-03-04, Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote: > >> > >>> I don't believe that merely using an RDBMS will solve all problems. What > >>> I meant was that, accepting what David said above, if you keep your data > >>> in an RDBMS, it will be easily available for the solution of any > >>> possible problem that can be solved using that data. > >> > >> No, this as well is wrong. Keeping "data" in RDBMS puts certain > >> restrictions on what can be stored there and how it can be used later. > > [...] > > A RDMBS > > cannot stop you from doing anything you want to with retrieved data. > > Yes, exactly this is wrong. Is this turning into a strong-typing debate? I am not here to debate types. I am a dynamic/no-typing fan (although they do have their domains where they have a net benefit). > (I hope you don't have in mind retrieving all > content and continuing without RDBMS.) Huh? > > -- > Regards, > Dmitry A. Kazakov > http://www.dmitry-kazakov.de -T-
Tegiri Nenashi wrote: > On Mar 6, 11:06 am, Bob Badour <bbad...@pei.sympatico.ca> wrote: > >>Tegiri Nenashi wrote: >> >>>On Mar 6, 10:38 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> >>>wrote: >> >>>>Does SELECT y FROM sine WHEN x= behave differently from sine? >> >>>OK, this is actually a good Turing test. So what function the >>>following query >> >>>SELECT x FROM sine WHEN y=0.5 >> >>>represent? >> >>arcsine(0.5) > > Eh, Bob, spoled all the fun -- I didn't doubt you (or anybody on > relational side) would solve this easily. I honestly wanted to clarify > how capable the other side is... Actually, I take back my answer. I don't know the header of sine; it could have a header like: sine(x,y,r,theta,sin,cos,tan) where: sin = y/r cos = x/r tan = y/x etc. In that case, the function you gave above is just the set of real numbers or whatever lame approximation of the reals we are using.
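Just to make the point about headers concrete: if sine really is a binary relation over (x, y), then reading it in either direction is the same operation, and the "backwards" read is arcsine. A toy sketch, assuming a tabulated relation over a sample of x values (the names are invented and the tolerance is arbitrary):

import java.util.ArrayList;
import java.util.List;

public class SineRelation {
    record Pair(double x, double y) {}

    public static void main(String[] args) {
        // Tabulate the relation sine(x, y) over [0, pi/2]; no direction is privileged.
        List<Pair> sine = new ArrayList<>();
        for (double x = 0; x <= Math.PI / 2; x += 1e-4) {
            sine.add(new Pair(x, Math.sin(x)));
        }

        // "SELECT y FROM sine WHERE x = 0.5236" reads the relation forwards...
        double y = sine.stream().filter(p -> Math.abs(p.x() - 0.5236) < 1e-4)
                       .findFirst().orElseThrow().y();

        // ...and "SELECT x FROM sine WHERE y = 0.5" reads the same relation backwards: arcsine(0.5).
        double x = sine.stream().filter(p -> Math.abs(p.y() - 0.5) < 1e-4)
                       .findFirst().orElseThrow().x();

        System.out.printf("y = %.4f, x = %.4f%n", y, x);   // roughly 0.5000 and 0.5236
    }
}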
Cimode <cimode@hotmail.com> wrote: >On Mar 6, 12:09 pm, Nick Keighley <nick_keighley_nos...@hotmail.com> >wrote: [snip] >> - it maps (almost) 1 to 1 to machine code >What is mapping 1:1 between machine code and assembler. Tis the first >time I hear somebody establishing cardinality between 2 languages. >What a bunch of crap. Nope. Most assembler instructions translate to one machine instruction each. That is the mapping that is being referred to. It is not perfect as there are pseudo-ops and macros, but it holds in general. [snip] Sincerely, Gene Wirchenko Computerese Irregular Verb Conjugation: I have preferences. You have biases. He/She has prejudices.
"Marshall" <marshall.spight@gmail.com> wrote in message news:3847aed1-f962-4751-b7bb-d352d3e9bc24@d21g2000prf.googlegroups.com... > Bags I have not looked at in any detail. They are certainly > the least important of the lot, however. But bags are so very, very easy to implement! This is not intended to be facetious. What I'm driving at is that practical systems that evolve with little theoretical foundation tend to implement bags because it's easier than not implementing bags. An example might be SQL, whose tables represent bags unless good luck, disciplined writing, or declared constraints result in tables that always represent sets.
On Mar 5, 11:19 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-04 17:29:01 -0600, TroyK <cs_tr...@juno.com> said: > > > > > > > On Mar 3, 3:11 pm, Robert Martin <uncle...@objectmentor.com> wrote: > >> On 2008-03-03 12:29:02 -0600, TroyK <cs_tr...@juno.com> said: > > >>> My experience is somewhere between 2 and 3 orders of magnitude > >>> difference between implementing a business rules change in the db vs. > >>> the programming team doing it in OO code. > > >> Then you should be able to fly rings around the programmers and get > >> them all fired. Why haven't you? > > > If by "fly rings around the programmers" you mean having a fully > > functioning reference implementation up and running in SQL within 2 > > weeks that ends up taking a team of 3 programmers over 3 months to > > implement in code, then, yeah, I guess I do. But the architecture > > called for the programming to be done in a business layer implemented > > in C# -- we expected and planned for that, so, happily, no one gets > > fired. > > I'm sorry, but are you saying that you blithely allowed your > organization to expend nine man months of fruitless labor? Or are you > saying that there was no way your SQL "reference implementation" could > have been used in production? > > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com > The Agile Transition Experts | web: www.objectmentor.com > 800-338-6716 | The discussion is about the veracity of the claims of orders of magnitude difference in the expression and implementation of business rules in SQL vs. in "application code". The architectural decisions and guidelines at play in my company, however, are not up for discussion. I'm assuming at this point that you accept the claims made by myself and Roy. If you have experience or data that contradicts these claims, feel free to present them. By the way, I'm specifically talking about measures of code (statements, LOC), not developer productivity (which was Roy's original point). My own observations on the latter are simply an interesting consequence of the former. TroyK
TroyK wrote: >Who said anything about stored procedures? I'm talking about >implementing the business rules via constraint declaration in the >database, and deriving new values throught the application of SQL >queries. Do you also treat dynamic constraints in this way? E.g. "unless otherwise specified, an employee shall get a 2% salary increase after each full year of employment". Or: "two movable 3D shapes shall never collide; collision is avoided by the following repositioning algorithm that is guaranteed to pull all objects apart that get too close: (...), however, this algorithm may be replaced with another that has the same property". -- Reinier
Marshall wrote: >On Mar 5, 7:58 am, "H. S. Lahman" <h...@pathfindermda.com> wrote: >> >> ... the relational model is a subset of the OO foundation ... > >Please describe where JOIN and UNION for example, >are to be found in the OO foundation. Or in any OO language. Generally, it requires a combination of OO with functional idioms (higher order functions, closures). Today's .NET languages support that; so do more dynamic languages such as Smalltalk, Perl, Python or Ruby; C++ templates support it. In Java we still have to fake them with inner classes, I believe. Of course there is still a difference: JOIN and UNION are symmetric, assuming a helicopter perspective in which the data manipulator has full access to all of the data, while the OO world is inherently asymmetric (code can only work with data explicitly exposed to the class the code is in). -- Reinier
On 6 mar, 22:20, Gene Wirchenko <ge...@ocis.net> wrote: > Cimode <cim...@hotmail.com> wrote: [Snipped] > Nope. Most assembler instructions translate to one machine > instruction each. That is the mapping that is being referred to. It > is not perfect as there are pseudo-ops and macros, but it holds in > general. So ? I am not blaming the *mapping* but rather the entire sloppy reasonning behind. (Water has oxygen, we breethe oxygen, therefore we could breethe water.) Do you think that the fact a physical *mov* is physicaly matching an ram adress memory content the content of another is sufficient to state an overgeneralized and simplyistic conclusion that * just as assembler "mismatches" objects so objects "mismatch" relations * Nothing but an simplyistic attempt to draw a vague abstract from a physical behavior when using assembler. > > Sincerely, > > Gene Wirchenko > > Computerese Irregular Verb Conjugation: > I have preferences. > You have biases. > He/She has prejudices.
Cimode <cimode@hotmail.com> wrote: >On 6 mar, 22:20, Gene Wirchenko <ge...@ocis.net> wrote: >> Cimode <cim...@hotmail.com> wrote: >[Snipped] >> Nope. Most assembler instructions translate to one machine >> instruction each. That is the mapping that is being referred to. It >> is not perfect as there are pseudo-ops and macros, but it holds in >> general. >So ? I am not blaming the *mapping* but rather the entire sloppy >reasonning behind. (Water has oxygen, we breethe oxygen, therefore we >could breethe water.) Your complaint about mapping was: "What is mapping 1:1 between machine code and assembler. Tis the first time I hear somebody establishing cardinality between 2 languages. What a bunch of crap." I replied to that. >Do you think that the fact a physical *mov* is physicaly matching an >ram adress memory content the content of another is sufficient to >state an overgeneralized and simplyistic conclusion that > >* just as assembler "mismatches" objects > so objects "mismatch" relations * > >Nothing but an simplyistic attempt to draw a vague abstract from a >physical behavior when using assembler. I can not parse the above. Sincerely, Gene Wirchenko Computerese Irregular Verb Conjugation: I have preferences. You have biases. He/She has prejudices.
Gene Wirchenko wrote: > Cimode <cimode@hotmail.com> wrote: > > >>On 6 mar, 22:20, Gene Wirchenko <ge...@ocis.net> wrote: >> >>>Cimode <cim...@hotmail.com> wrote: >> >>[Snipped] >> >>> Nope. Most assembler instructions translate to one machine >>>instruction each. That is the mapping that is being referred to. It >>>is not perfect as there are pseudo-ops and macros, but it holds in >>>general. >> >>So ? I am not blaming the *mapping* but rather the entire sloppy >>reasonning behind. (Water has oxygen, we breethe oxygen, therefore we >>could breethe water.) > > > Your complaint about mapping was: > "What is mapping 1:1 between machine code and assembler. Tis the > first time I hear somebody establishing cardinality between 2 > languages. What a bunch of crap." > > I replied to that. > > >>Do you think that the fact a physical *mov* is physicaly matching an >>ram adress memory content the content of another is sufficient to >>state an overgeneralized and simplyistic conclusion that >> >>* just as assembler "mismatches" objects >> so objects "mismatch" relations * >> >>Nothing but an simplyistic attempt to draw a vague abstract from a >>physical behavior when using assembler. > > > I can not parse the above. That loud whooshing sound you heard was the point travelling effortlessly and at great speed between Cimode's ears.
On Mar 8, 5:38 am, rp...@pcwin518.campus.tue.nl (rpost) wrote: > Marshall wrote: > >On Mar 5, 7:58 am, "H. S. Lahman" <h...@pathfindermda.com> wrote: > > >> ... the relational model is a subset of the OO foundation ... > > >Please describe where JOIN and UNION for example, > >are to be found in the OO foundation. Or in any OO language. > > Generally, it requires a combination of OO with functional idioms > (higher order functions, closures). Today's .NET languages support that; > so do more dynamic languages such as Smalltalk, Perl, Python or Ruby; > C++ templates support it. In Java we still have to fake them with > inner classes, I believe. Well, I didn't say one couldn't code up JOIN and UNION in an OOPL. For example in Java, Collections.addAll() as used in classes implementing the Set interface is UNION. We are approaching the edge of the Turing tarpit here. My point was that JOIN and UNION are not to be found in the *foundations* of OOPLs, so the claim that "the relational model is a subset of the OO foundation" is false. That OOPLs are Turing-complete and hence can implement any computable function that could be expressed in the RM is the degenerate case; it is true but it reveals nothing. One could as well say that the RM is a subset of the foundations of finite state automata, or that the lambda calculus is a subset of the OO foundation. Or whatever. Marshall
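For what it is worth, a tiny Java sketch of that observation (the values are invented): set union falls out of addAll because a Set silently discards duplicates.

import java.util.HashSet;
import java.util.Set;

// Union of two sets via addAll: the duplicate element (3) collapses,
// much as duplicates do in a relational UNION.
public class UnionSketch {
    public static void main(String[] args) {
        Set<Integer> a = new HashSet<>(Set.of(1, 2, 3));
        Set<Integer> b = Set.of(3, 4, 5);

        Set<Integer> union = new HashSet<>(a);
        union.addAll(b);

        System.out.println(union); // e.g. [1, 2, 3, 4, 5] (iteration order not guaranteed)
    }
}

As Marshall says, the fact that this can be coded up does not put UNION in the foundations of the language; it is library behaviour, not part of the object model.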
On 8 mar, 23:11, Gene Wirchenko <ge...@ocis.net> wrote: [Snipped] > >Nothing but an simplyistic attempt to draw a vague abstract from a > >physical behavior when using assembler. > > I can not parse the above. Sorry. I should have been more explicit. I perfectly understand the relationship between assembler and machine language. I just do not see how one could use the term *cardinality* for that. I may seem picky on terminology but I do believe it is an appropriate term to use. And even if I were to admit that it would be a good term and one could see some form of cardinality between languages, quite frankly I don't see how one can jump from that to the previous conclusion. That was my point. > Sincerely, > > Gene Wirchenko > > Computerese Irregular Verb Conjugation: > I have preferences. > You have biases. > He/She has prejudices.
On 2008-03-06 06:37:19 -0600, JOG <jog@cs.nott.ac.uk> said: > Using nonsense words like 'subscope' doesn't do conversation any > favours Robert. A sub scope is a scope that lives within another scope. The simplest example is the following C code.

{ // parent scope
    int i;
    int j;
    { // sub scope
        int i;
    }
}

-- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-06 06:37:19 -0600, JOG <jog@cs.nott.ac.uk> said: > On Mar 6, 6:26 am, Robert Martin <uncle...@objectmentor.com> wrote: >> That's not inferrence. > > Well that all jolly-well looks like inference to me. I agree that you can infer what elements a subclass has by knowing the elements of the base class. However, that's YOU making the inference. The inheritance itself is the redeclaration of variables and functions of the base to the (sub)scope of the derived class. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
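For readers outside comp.object, a minimal Java sketch (class names invented) of what that "redeclaration in a subscope" reading amounts to; it illustrates the mechanism being described, it is not code from any post.

// Members declared in Base are visible inside Derived as if declared there,
// and Derived may redeclare (override) them in its narrower scope.
class Base {
    int x = 1;
    int value() { return x; }
}

class Derived extends Base {
    // 'x' and 'value()' can be used here without being written again...
    int twice() { return value() * 2; }

    // ...and 'value()' is redeclared in this subscope, shadowing the base version.
    @Override
    int value() { return x + 10; }
}

public class SubscopeSketch {
    public static void main(String[] args) {
        System.out.println(new Derived().twice()); // 22: the overriding value() is called
    }
}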
Robert Martin wrote: > On 2008-03-06 06:37:19 -0600, JOG <jog@cs.nott.ac.uk> said: > >> Using nonsense words like 'subscope' doesn't do conversation any >> favours Robert. > > A sub scope is a scope that lives within another scope. The simplest > example is the following C code. > > > { // parent scope > int i; > int j; > { // sub scope > int i; > } > } If high-falutin' lingo like 'lives within another scope' passes for explanation in OO circles, no wonder they perpetuate other babble besides. A newcomer would profit not a bit from the above but a lot from an explanation of a logical stack, which is a simple and clear runtime mechanism (regardless of whether a hardware stack or heap is used). Then he could call it what he wants, once he understands what is really happening on a typical machine. Why are the regulars here encouraging these cross-posters and 'machine as animal' mystics?
On Mar 9, 2:23 am, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-06 06:37:19 -0600, JOG <j...@cs.nott.ac.uk> said: > > > On Mar 6, 6:26 am, Robert Martin <uncle...@objectmentor.com> wrote: > >> That's not inferrence. > > > Well that all jolly-well looks like inference to me. > > I agree that you can infer what elements a subclass has by knowing the > elements of the base class. Well then you have conceded defeat, given our conversation has gone as follows: jog: "So why not treat all 'inheritance' in this way [as inference]?" Robert: "Because all inheritance is not about inference." jog: show me an example when it is not. Robert: I'll ignore that, and just describe the OO mechanism again. jog: Look, here is how everything you have listed could have been described through inference Robert: Ok, I agree that all inheritance can be described via inference. I guess we are done, your objection having been overcome. > However, that's YOU making the inferrence. This was a non-sequitur. I think because of your OO focus you haven't recognized that (in this case) there is a danger of putting the cart before the horse. Logic first, mechanism second, not t'other way around. Regards, J. > The inheritance itself is the redeclaration of variables and > functions of the base to the (sub)scope of the derived class. > > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com > The Agile Transition Experts | web: www.objectmentor.com > 800-338-6716 |
On Mar 8, 7:39 pm, JOG <j...@cs.nott.ac.uk> wrote: > > Logic first, mechanism second Brilliant! And in just four words, too. I salute you sir! Marshall
On Mar 8, 7:59 pm, paul c <toledoby...@ac.ooyah> wrote: > Robert Martin wrote: > > >> Using nonsense words like 'subscope' doesn't do conversation any > >> favours Robert. > > > A sub scope is a scope that lives within another scope. The simplest > > example is the following C code. > > > { // parent scope > > int i; > > int j; > > { // sub scope > > int i; > > } > > } > > If high-falutin' lingo like 'lives within another scope' passes for > explanation in OO circles no wonder they are perpetuate other babble > besides. Well, I dunno. "Scope" is a pretty well-established term. I could pick at the anthropomorphism, but I'd just be picking. I'd say it is more usual to say "nested scope" but it's clear enough to me that's what he means with "subscope." Marshall
Marshall wrote: .... > Well, I dunno. "Scope" is a pretty well-established term. I could pick > at the anthropomorphism, but I'd just be picking. I'd say it is more > usual > to say "nested scope" but it's clear enough to me that's what he means > with "subscope." Well, it's not clear to me what living in a scope could possibly mean, except of course for pure mumbo-jumbo.
Yagotta B. Kidding wrote: > S Perryman <q@q.com> wrote in news:fqpj0o$lke$1@aioe.org: TN>In the other message TN>you dismissed projection as being covered by the concept of TN>subclassing. SP>No. Projection is covered by *type substitutability* . TN>Can you please be more specific? If we remove some [data] TN>attributes, does it mean the resulting "entity" is a subclass. >>No, merely that the resulting entity is now deemed to be of >>another type, substitutable with the original type. >>type T >>{ >> x, y, z >>} >> >>Set<T> ts ; >>Set< type { x, y } > ps = { e IN ts : e.x > 123 } ; >>The elements of ps are effectively projections of the elements in ts. > Hold on, you have a value of type Set<T> assigned to a variable of type > Set<type{x,y}>. Basic type substitutability (structural equivalence). The { x, y } type has properties in common with T (the x/y properties). Therefore the assignment is legal. So the assignment is a projection of a type (x,y,z) to a type (x,y). Regards, Steven Perryman
[Snipped] > Sorry. I should have been more explicit. I perfectly understand the > relationship between assembler to machine language. I just do not see > how one could use the term *cardinality* for that. I may seem picky > on terminology but I do believe it is an apprpriate term to use. Typo. Sorry. I meant that I do *not* believe it is an appropriate term to use. One word can make all the difference. [Snipped]
"Marshall" <marshall.spight@gmail.com> wrote in message news:35dfaf3a-6a60-41f1-b533-928211aae429@s8g2000prg.googlegroups.com... > On Mar 8, 7:39 pm, JOG <j...@cs.nott.ac.uk> wrote: > > > > Logic first, mechanism second > > Brilliant! And in just four words, too. I salute you sir! > I agree. This one is a keeper. Any room for it in the glossary?
"paul c" <toledobysea@ac.ooyah> wrote in message news:AAMAj.63950$w94.63399@pd7urf2no... > Marshall wrote: > ... > > Well, I dunno. "Scope" is a pretty well-established term. I could pick > > at the anthropomorphism, but I'd just be picking. I'd say it is more > > usual > > to say "nested scope" but it's clear enough to me that's what he means > > with "subscope." > > > Well, it's not clear to me what living in a scope could possibly mean, > except of course for pure mumbo-jumbo. I disagree. It's clear enough for discussion purposes. A value stored in a variable lives in the scope of that variable, until it's copied (forwarded) to another place.
S Perryman <q@q.com> wrote in news:fr0hft$8gf$1@aioe.org: > Yagotta B. Kidding wrote: > >> S Perryman <q@q.com> wrote in news:fqpj0o$lke$1@aioe.org: > > TN>In the other message > TN>you dismissed projection as being covered by the concept of > TN>subclassing. > > SP>No. Projection is covered by *type substitutability* . > > TN>Can you please be more specific? If we remove some [data] > TN>attributes, does it mean the resulting "entity" is a subclass. > >>>No, merely that the resulting entity is now deemed to be of >>>another type, substitutable with the original type. > >>>type T >>>{ >>> x, y, z >>>} >>> >>>Set<T> ts ; >>>Set< type { x, y } > ps = { e IN ts : e.x > 123 } ; > >>>The elements of ps are effectively projections of the elements in ts. > >> Hold on, you have a value of type Set<T> assigned to a variable of >> type Set<type{x,y}>. > > Basic type substitutability (structural equivalence) . > The { x, y } type has properties in common with T (the x/y properties) > . Therefore the assignment is legal. > > So the assignment is a projection is of a type (x,y,z) to a type (x,y) I think you are confused at least on two counts. In any OO language I am familiar with, you can assign a subtype value to a supertype variable. However, you did not establish any subtype/supertype relationship in your example, therefore, the assignment is illegal. Even when you establish such a relationship, no projection really takes place; the information is still there, it's just hidden, as can easily be seen by casting back to the original subtype. > > > Regards, > Steven Perryman >
On Mar 9, 9:13 am, "Yagotta B. Kidding" <y...@mymail.com> wrote: > S Perryman <q...@q.com> wrote innews:fr0hft$8gf$1@aioe.org: > > Yagotta B. Kidding wrote: > >> S Perryman <q...@q.com> wrote innews:fqpj0o$lke$1@aioe.org: > > > TN>In the other message > > TN>you dismissed projection as being covered by the concept of > > TN>subclassing. > > > SP>No. Projection is covered by *type substitutability* . > > > TN>Can you please be more specific? If we remove some [data] > > TN>attributes, does it mean the resulting "entity" is a subclass. > > >>>No, merely that the resulting entity is now deemed to be of > >>>another type, substitutable with the original type. > > >>>type T > >>>{ > >>> x, y, z > >>>} > > >>>Set<T> ts ; > >>>Set< type { x, y } > ps = { e IN ts : e.x > 123 } ; > > >>>The elements of ps are effectively projections of the elements in ts. > > >> Hold on, you have a value of type Set<T> assigned to a variable of > >> type Set<type{x,y}>. > > > Basic type substitutability (structural equivalence) . > > The { x, y } type has properties in common with T (the x/y properties) > > . Therefore the assignment is legal. > > > So the assignment is a projection is of a type (x,y,z) to a type (x,y) > > In any OO langauage I am familiar wth, you can assign a subtype value > to a supertype variable. However, you did not establish any > subtype/supertype relationship in your example, therefore, the > assignment is illegal. If the system being discussed is based on structural rather than nominal types (which he explicitly stated, although not in a very obvious way) then it is sufficient to compare the *structure* of two types to determine if one is a subtype of the other. In this case, attributes {x, y} are a subset of {x, y, z} and so a subtype relationship exists. No, this is not how Java or C++ work; OCaml maybe? > Even when you establish such relationship, no projection really takes > place, the information is still there, it's just hidden, as can > easily be seen by casting back to the original subtype. Well, this is approximately true, but it's not clear how important it is. What does it mean *logically* for a projection to "really take place?" What if the language doesn't have upcasting? Note that we can translate Mr. Perryman's code into SQL in a fairly straightforward way and preserve all the properties he wants from his example. Which is unsurprising since SQL is structurally typed. Marshall
Yagotta B. Kidding wrote: > S Perryman <q@q.com> wrote in news:fr0hft$8gf$1@aioe.org: YK>Hold on, you have a value of type Set<T> assigned to a variable of YK>type Set<type{x,y}>. >>Basic type substitutability (structural equivalence) . >>The { x, y } type has properties in common with T (the x/y properties) >>. Therefore the assignment is legal. >>So the assignment is a projection is of a type (x,y,z) to a type (x,y) > I think you are confused at least on two counts. > In any OO langauage I am familiar wth, you can assign a subtype value > to a supertype variable. > However, you did not establish any > subtype/supertype relationship in your example, therefore, the > assignment is illegal. 1. You are correct, if you use the *Simula* model of type substitutability (ie Y is substitutable for X if there is an inheritance lineage from Y to X) . 2. There are OO prog langs, strongly and weakly typed, that are not forced to use the Simula model. > Even when you establish such relationship, no projection really takes > place, the information is still there, it's just hidden, as can > easily be seen by casting back to the original subtype. Is this not an implementation aspect, as opposed to a conceptual one ?? There could be prog langs that forbid coercion back the other way etc (even if the underlying impl was using a "hidden" approach) . Regards, Steven Perryman
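A rough Java approximation of the example under discussion, with invented names. Java is nominally typed (the Simula model mentioned above), so it can only mimic the structural case by declaring the { x, y } "projection type" as an interface that T explicitly implements; a structurally typed language would accept the assignment on shape alone.

// The XY interface plays the role of the projected type { x, y }.
interface XY {
    int x();
    int y();
}

// T carries x, y and z; viewed through XY, only x and y are visible.
class T implements XY {
    private final int x, y, z;
    T(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
    public int x() { return x; }
    public int y() { return y; }
    public int z() { return z; }
}

public class ProjectionSketch {
    public static void main(String[] args) {
        XY p = new T(1, 2, 3);
        System.out.println(p.x() + "," + p.y()); // the "projection" view

        // The z attribute is hidden rather than gone: a downcast recovers it,
        // which is the point raised above about no projection really taking place.
        System.out.println(((T) p).z());
    }
}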
"David Cressey" <cressey73@verizon.net> wrote: [snip] >I disagree. It's clear enough for discussion purposes. A value stored in a >variable lives in the scope of that variable, until it's copied (forwarded) >to another place. Well, no, it does not. A variable has its value during its LIFETIME. It can be referred to in its SCOPE. These can be the same for a variable, but they can be different. Sincerely, Gene Wirchenko Computerese Irregular Verb Conjugation: I have preferences. You have biases. He/She has prejudices.
Gene Wirchenko wrote: > "David Cressey" <cressey73@verizon.net> wrote: > > [snip] > > >>I disagree. It's clear enough for discussion purposes. A value stored in a >>variable lives in the scope of that variable, until it's copied (forwarded) >>to another place. > > Well, no, it does not. A variable has its value during its > LIFETIME. It can be referred to in its SCOPE. These can be the same > for a variable, but they can be different. Even that is not always true. Sometimes a variable can be referred to outside its scope by using a scope qualifier. Sometimes not.
S Perryman <q@q.com> wrote in news:fr1kav$1su$1@aioe.org: > Yagotta B. Kidding wrote: > >> S Perryman <q@q.com> wrote in news:fr0hft$8gf$1@aioe.org: > > YK>Hold on, you have a value of type Set<T> assigned to a variable of > YK>type Set<type{x,y}>. > >>>Basic type substitutability (structural equivalence) . >>>The { x, y } type has properties in common with T (the x/y >>>properties) . Therefore the assignment is legal. > >>>So the assignment is a projection is of a type (x,y,z) to a type >>>(x,y) > >> I think you are confused at least on two counts. > >> In any OO langauage I am familiar wth, you can assign a subtype value >> to a supertype variable. > >> However, you did not establish any >> subtype/supertype relationship in your example, therefore, the >> assignment is illegal. > > 1. You are correct, if you use the *Simula* model of type > substitutability > (ie Y is substitutable for X if there is an inheritance lineage > from Y to X) . > > 2. There are OO prog langs, strongly and weakly typed, that are not > forced to use the Simula model. > That's a good point. > >> Even when you establish such relationship, no projection really takes >> place, the information is still there, it's just hidden, as can >> easily be seen by casting back to the original subtype. > > Is this not an implementation aspect, as opposed to a conceptual one > ?? There could be prog langs that forbid coercion back the other way > etc (even if the underlying impl was using a "hidden" approach) . > I think you are right about the implementation aspect and I agree that the structural supertype assignment can be viewed as a projection. > > Regards, > Steven Perryman
Marshall <marshall.spight@gmail.com> wrote in news:b5ab6022-5cea-45a1- 9e64-fe2b3a04e8de@e6g2000prf.googlegroups.com: > If the system being discussed is based on structural rather > than nominal types (which he explicitly stated, although not > in a very obvious way) then it is sufficient to compare the > *structure* of two types to determine if one is a subtype of > the other. In this case, attributes {x, y} are a subset of > {x, y, z} and so a subtype relationship exists. No, this is > not how Java or C++ work; OCaml maybe? > Yes, you are right about OCAML. I forgot that it indeed uses structural subtyping. > >> Even when you establish such relationship, no projection really takes >> place, the information is still there, it's just hidden, as can >> easily be seen by casting back to the original subtype. > > Well, this is approximately true, but it's not clear how > important it is. What does it mean *logically* for a projection > to "really take place?" What if the language doesn't have > upcasting? > > Note that we can translate Mr. Perryman's code into SQL > in a fairly straightforward way and preserve all the > properties he wants from his example. Which is unsurprising > since SQL is structurally typed. Superficially, you can perform the translation, but the assignment required to imitate the projection makes any such language not referentially transparent. You may consider the non-transparency a non- issue of course. > > > Marshall
On Mar 9, 7:25 pm, "Yagotta B. Kidding" <y...@mymail.com> wrote: > Marshall <marshall.spi...@gmail.com> wrote in news:b5ab6022-5cea-45a1- > > Superficially, you can perform the translation, but the assignment > required to imitate the projection makes any such language not > referentially transparent. You may consider the non-transparency a non- > issue of course. Um, how so? I'm not sure I see what you mean. Marshall
Gene Wirchenko wrote: > David Cressey wrote: > > [snip] > >> I disagree. It's clear enough for discussion purposes. A value stored in a >> variable lives in the scope of that variable, until it's copied (forwarded) >> to another place. > > Well, no, it does not. A variable has its value during its > LIFETIME. It can be referred to in its SCOPE. These can be the same > for a variable, but they can be different. The 'life' a variable has outside its scope is half-life at best. The variable does occupy resources, but in order to be useful in any way it has to be in scope.
Marshall <marshall.spight@gmail.com> wrote in news:95abb6fb-e5e1-4604-b0f9-9fe3d5bcfadc@e10g2000prf.googlegroups.com: > On Mar 9, 7:25 pm, "Yagotta B. Kidding" <y...@mymail.com> wrote: >> Marshall <marshall.spi...@gmail.com> wrote in >> news:b5ab6022-5cea-45a1- >> >> Superficially, you can perform the translation, but the assignment >> required to imitate the projection makes any such language not >> referentially transparent. You may consider the non-transparency a >> non- issue of course. > > Um, how so? I'm not sure I see what you mean. > > > Marshall > "A language that supports the concept that ``equals can be substituted for equals'' in an expresssion without changing the value of the expression is said to be referentially transparent. Referential transparency is violated when we include set! in our computer language. This makes it tricky to determine when we can simplify expressions by substituting equivalent expressions. Consequently, reasoning about programs that use assignment becomes drastically more difficult." http://mitpress.mit.edu/sicp/full-text/sicp/book/node54.html
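A small sketch of the quoted point in Java terms (the example is invented): once assignment is allowed, an expression and the value it happened to produce are no longer interchangeable.

// With a pure function, f() + f() could safely be rewritten as 2 * f().
// With assignment in the picture, that rewrite changes the answer.
public class RefTransparencySketch {
    static int counter = 0;

    static int next() {
        counter = counter + 1; // the assignment that breaks the substitution
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(next() + next()); // 1 + 2 = 3
        counter = 0;
        System.out.println(2 * next());      // 2 * 1 = 2
    }
}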
"mAsterdam" <mAsterdam@vrijdag.org> wrote in message news:47d4e9ac$0$14352$e4fe514c@news.xs4all.nl... > Gene Wirchenko wrote: > > David Cressey wrote: > > > > [snip] > > > >> I disagree. It's clear enough for discussion purposes. A value stored in a > >> variable lives in the scope of that variable, until it's copied (forwarded) > >> to another place. > > > > Well, no, it does not. A variable has its value during its > > LIFETIME. It can be referred to in its SCOPE. These can be the same > > for a variable, but they can be different. > > The 'life' a variable hase outside it's scope is half-life at best. > The variable does occupy resources, but in order to be useful in > any way it has to be in scope. > > In addition, the lifetime of the association betwen a variable and a given value is different from the lifetime of the variable itself.
On Mar 10, 5:30 am, "Yagotta B. Kidding" <y...@mymail.com> wrote: > Marshall <marshall.spi...@gmail.com> wrote > > On Mar 9, 7:25 pm, "Yagotta B. Kidding" <y...@mymail.com> wrote: > > >> Superficially, you can perform the translation, but the assignment > >> required to imitate the projection makes any such language not > >> referentially transparent. You may consider the non-transparency a > >> non- issue of course. > > > Um, how so? I'm not sure I see what you mean. > > "A language that supports the concept that ``equals can be substituted > for equals'' in an expresssion without changing the value of the > expression is said to be referentially transparent. Referential > transparency is violated when we include set! in our computer language. > This makes it tricky to determine when we can simplify expressions by > substituting equivalent expressions. Consequently, reasoning about > programs that use assignment becomes drastically more difficult." Dude? I'm a language junkie with an interest in FP. It wasn't the term "referential transparency" I was having trouble with. :-) Rather, I don't see where the assignment comes in. Particularly in discussing SQL, which, strictly speaking, lacks assignment. (Although it does have other imperative operators, yes.) And Mr. Perryman's example was written in a single-assignment, mathematical style. > http://mitpress.mit.edu/sicp/full-text/sicp/book/node54.html Ah, SICP. Did you know that David Cressey's name appears in it? Apparently he was at ground zero in the early days of garbage collection, and coined a term that is still used. I told that story when my languages discussion group had Mr. Abelson for lunch. Marshall PS. Name dropping alert!!! I try to resist, but I am weak.
"Marshall" <marshall.spight@gmail.com> wrote in message news:a8e31977-9f84-4c58-bca3-e0aa4e855676@s19g2000prg.googlegroups.com... > On Mar 10, 5:30 am, "Yagotta B. Kidding" <y...@mymail.com> wrote: > > Marshall <marshall.spi...@gmail.com> wrote > > > On Mar 9, 7:25 pm, "Yagotta B. Kidding" <y...@mymail.com> wrote: > > > > >> Superficially, you can perform the translation, but the assignment > > >> required to imitate the projection makes any such language not > > >> referentially transparent. You may consider the non-transparency a > > >> non- issue of course. > > > > > Um, how so? I'm not sure I see what you mean. > > > > "A language that supports the concept that ``equals can be substituted > > for equals'' in an expresssion without changing the value of the > > expression is said to be referentially transparent. Referential > > transparency is violated when we include set! in our computer language. > > This makes it tricky to determine when we can simplify expressions by > > substituting equivalent expressions. Consequently, reasoning about > > programs that use assignment becomes drastically more difficult." > > Dude? I'm a language junkie with an interest in FP. It wasn't the > term "referential transparency" I was having trouble with. :-) > > Rather, I don't see where the assignment comes in. Particularly > in discussing SQL, which, strictly speaking, lacks assignment. > (Although it does have other imperative operators, yes.) And > Mr. Perryman's example was written in a single-assignment, > mathematical style. > Oh? I thought that the SET clause of the UPDATE statement was an assignment in disguise. > > http://mitpress.mit.edu/sicp/full-text/sicp/book/node54.html > > Ah, SICP. Did you know that David Cressey's name appears > in it? Apparently he was at ground zero in the early days of > garbage collection, and coined a term that is still used. I told > that story when my languages discussion group had > Mr. Abelson for lunch. > > > Marshall > > PS. Name dropping alert!!! I try to resist, but I am weak.
"Marshall" <marshall.spight@gmail.com> wrote in message news:a8e31977-9f84-4c58-bca3-e0aa4e855676@s19g2000prg.googlegroups.com... > > http://mitpress.mit.edu/sicp/full-text/sicp/book/node54.html > > Ah, SICP. Did you know that David Cressey's name appears > in it? Apparently he was at ground zero in the early days of > garbage collection, and coined a term that is still used. I told > that story when my languages discussion group had > Mr. Abelson for lunch. I don't think of 1971, when I wrote the garbage collector for MDL as "the early days of garbage collection." The "early days would have been in the 1950s when McCarthy wrote the garbage collector for LISP. The MDL garbage collector was more ambitious than the Lisp garbage collector. But I had the advantage of learning from the work that had already been done. What made the MDL garbage collector more ambitious was that we supported a data structure called a "vector" that was addressable via the PDP-10 address arithmetic. The use of a random access structure was 1000 times faster than the use of a linked list, for some of the things we wanted to do. The use of hardware address arithmetic made it even faster. The consequence of support for vectors was that the garbage collector had to not only reclaim unused space, but defragment it as well. Somebody who came along after me got a PhD by rewriting the MDL garbage collector so that it could do its work in the background, without suspending all the MDL execution threads. I have no idea how such a thing would have worked. > > > Marshall > > PS. Name dropping alert!!! I try to resist, but I am weak. Thanks! Normally the only people who drop my name are the people who cull mailing lists of "important people".
On Mar 10, 9:55 am, "David Cressey" <cresse...@verizon.net> wrote: > "Marshall" <marshall.spi...@gmail.com> wrote in message > > > Rather, I don't see where the assignment comes in. Particularly > > in discussing SQL, which, strictly speaking, lacks assignment. > > (Although it does have other imperative operators, yes.) And > > Mr. Perryman's example was written in a single-assignment, > > mathematical style. > > Oh? > > I thought that the SET clause of the UPDATE statement was an assignment in > disguise. Roughly speaking, sure. Another way to say approximately what you just said would be to say "[SQL] does have other imperative operators, yes." :-) Marshall
Marshall wrote: > On Mar 10, 5:30 am, "Yagotta B. Kidding" <y...@mymail.com> wrote: > >>Marshall <marshall.spi...@gmail.com> wrote >> >>>On Mar 9, 7:25 pm, "Yagotta B. Kidding" <y...@mymail.com> wrote: [snip] > had > Mr. Abelson for lunch. Cannibals!
On Mar 10, 10:12 am, Bob Badour <bbad...@pei.sympatico.ca> wrote: > Marshall wrote: > > > had Mr. Abelson for lunch. > > Cannibals! With some fava beans, and a nice Chianti. thp-thp-thp-thp-thp-thp-thp! Marshall
Marshall <marshall.spight@gmail.com> wrote in news:a8e31977-9f84-4c58-bca3-e0aa4e855676@s19g2000prg.googlegroups.com: > On Mar 10, 5:30 am, "Yagotta B. Kidding" <y...@mymail.com> wrote: >> Marshall <marshall.spi...@gmail.com> wrote >> > On Mar 9, 7:25 pm, "Yagotta B. Kidding" <y...@mymail.com> wrote: >> >> >> Superficially, you can perform the translation, but the >> >> assignment required to imitate the projection makes any such >> >> language not referentially transparent. You may consider the >> >> non-transparency a non- issue of course. >> >> > Um, how so? I'm not sure I see what you mean. >> >> "A language that supports the concept that ``equals can be >> substituted for equals'' in an expresssion without changing the value >> of the expression is said to be referentially transparent. >> Referential transparency is violated when we include set! in our >> computer language. This makes it tricky to determine when we can >> simplify expressions by substituting equivalent expressions. >> Consequently, reasoning about programs that use assignment becomes >> drastically more difficult." > > Dude? I'm a language junkie with an interest in FP. It wasn't the > term "referential transparency" I was having trouble with. :-) > > Rather, I don't see where the assignment comes in. You don't ? >And > Mr. Perryman's example was written in a single-assignment, > mathematical style. And yet you still do not see where the assignment comes in in Mr. Perryman's example.
David Cressey wrote: .... > > I thought that the SET clause of the UPDATE statement was an assignment in > disguise. > ... I would have thought UPDATE itself is assignment in disguise and SET is just an arbitrary language device, just as scope operators don't seem to have much to do with database theory, ;). (I admit I have no idea whether UPDATE without SET is allowed.) Regarding SICP, I think somewhere it states that FP languages don't have assignment, which I didn't see here, at least in the few OO cross-posts that I accidentally read. As usual when replying to David C, no offence but can't resist mentioning that Date has claimed that the ACID property that SQL vendors tout is basically incomplete (my words, not his), I think he might have been including SQL's UPDATE when he said that. Regarding scope, I even question some of the deep thinkers, such as Pascal, when they seem to imply that rdbms 'logic' requires persistence. Maybe that's misrepresenting them but I've never seen any need for an algebra or calculus per se to include operators that have to do with persistence, as practical as they might be. For me, the greatest value of relational theory is from the point of view of the user and how he can test results in his head with merely a basic understanding of project and join and/or traditional first-order logic and without such users needing to concern themselves with how OO or machine techniques choose to obtain those results.
On Mar 10, 10:47 am, "Yagotta B. Kidding" <y...@mymail.com> wrote: > > Mr. Perryman's example was written in a single-assignment, > > mathematical style. > > And yet you still do not see where the assignment comes in in Mr. > Perryman's example. You understand that single-assignment is different from assignment, right? You know that single-assignment doesn't cause any problems with referential transparency, right? Well, we're into a subpoint of a subpoint of a subpoint. Perhaps a refreshing Snapple is in order! Marshall
On Mar 8, 7:09 am, rp...@pcwin518.campus.tue.nl (rpost) wrote: > TroyK wrote: > >Who said anything about stored procedures? I'm talking about > >implementing the business rules via constraint declaration in the > >database, and deriving new values throught the application of SQL > >queries. > > Do you also treat dynamic constraints in this way? > > E.g. "unless otherwise specified, an employee shall get a 2% > salary increase after each full year of employment". > Or: "two movable 3D shapes shall never collide; collision is avoided > by the following repositioning algorithm that is guaranteed > to pull all objects apart that get too close: (...), however, this > algorithm may be replaced with another that has the same property". > > -- > Reinier Inferring the definition of "dynamic constraint" from your examples, in the first, I think that it would be sufficient to model and record whatever attribute means "otherwise specified" and include that as part of the condition in the constraint declaration. I haven't had occasion to apply the technique to anything more complex than such an example, so I'll have to say "I don't know" to the 3D example. TroyK
On Mar 10, 3:56 pm, TroyK <cs_tr...@juno.com> wrote: > On Mar 8, 7:09 am, rp...@pcwin518.campus.tue.nl (rpost) wrote: > > TroyK wrote: > > >Who said anything about stored procedures? I'm talking about > > >implementing the business rules via constraint declaration in the > > >database, and deriving new values throught the application of SQL > > >queries. > > > Do you also treat dynamic constraints in this way? > > > E.g. "unless otherwise specified, an employee shall get a 2% > > salary increase after each full year of employment". > > Or: "two movable 3D shapes shall never collide; collision is avoided > > by the following repositioning algorithm that is guaranteed > > to pull all objects apart that get too close: (...), however, this > > algorithm may be replaced with another that has the same property". > > > -- > > Reinier > > Inferring the definition of "dynamic constraint" from your examples, > in the first, I think that it would be sufficient to model and record > whatever attribute means "otherwise specified" and include that as > part of the condition in the constraint declaration. > > I haven't had occasion to apply the technique to anything more complex > than such an example, so I'll have to say "I don't know" to the 3D > example. > > TroyK Sorry to self-reply, just wanted to state that the general principle behind what I'm trying to describe is to implement exceptions to the rule as nothing more than another rule. TroyK
TroyK wrote: > On Mar 8, 7:09 am, rp...@pcwin518.campus.tue.nl (rpost) wrote: > >>TroyK wrote: >> >>>Who said anything about stored procedures? I'm talking about >>>implementing the business rules via constraint declaration in the >>>database, and deriving new values throught the application of SQL >>>queries. >> >>Do you also treat dynamic constraints in this way? >> >>E.g. "unless otherwise specified, an employee shall get a 2% >>salary increase after each full year of employment". with not_excluded_ee = ( employees join ( employees{ee#} minus excluded_ees{ee#} ) ), increases = extend ( not_excluded_ee where is_anniversary_period(pay_period(now),hire_date) ) add increase = salary * percent(2) .... >>Or: "two movable 3D shapes shall never collide; collision is avoided >>by the following repositioning algorithm that is guaranteed >>to pull all objects apart that get too close: (...), however, this >>algorithm may be replaced with another that has the same property". with other_objects = objects rename all prepending "other_" , collisions = objects join other_objects where rank_attrib < other_rank_attrib and min_distance(polygons,other_polygons) < threshold_distance , while exists(collisions) { update collisions set position = ... , other_position = ... } >> >>-- >>Reinier > > Inferring the definition of "dynamic constraint" from your examples, > in the first, I think that it would be sufficient to model and record > whatever attribute means "otherwise specified" and include that as > part of the condition in the constraint declaration. What was the constraint part again? > I haven't had occasion to apply the technique to anything more complex > than such an example, so I'll have to say "I don't know" to the 3D > example. > > TroyK
"paul c" <toledobysea@ac.ooyah> wrote in message news:oufBj.69267$pM4.44570@pd7urf1no... > David Cressey wrote: > ... > > > > I thought that the SET clause of the UPDATE statement was an assignment in > > disguise. > > ... > > I would have thought UPDATE itself is assignment in disguise and SET is > just an arbitrary language device, just as scope operators don't seem to > have much to do with database theory, ;). (I admit I have no idea > whether UPDATE without SET is allowed.) Yes, that's probably more precise than what I said. > Regarding SICP, I think somewhere it states that FP languages don't have > assignment, which I didn't see here, at least in the few OO cross-posts > that I accidentally read. > > > As usual when replying to David C, no offence but can't resist > mentioning that Date has claimed that the ACID property that SQL > vendors tout is basically incomplete (my words, not his), I think he > might have been including SQL's UPDATE when he said that. It's my understanding, just from folowing c.d.t. discussions, that Date & Darwen, in the D language, aim for something more ambitious than ACID, namely a language that permits all transactions to be expressed as a single action. If you do that, then you get all the benefits of ACID and maybe a few others.
On 2008-03-08 21:39:37 -0600, JOG <jog@cs.nott.ac.uk> said: > On Mar 9, 2:23 am, Robert Martin <uncle...@objectmentor.com> wrote: >> On 2008-03-06 06:37:19 -0600, JOG <j...@cs.nott.ac.uk> said: >> >>> On Mar 6, 6:26 am, Robert Martin <uncle...@objectmentor.com> wrote: >>>> That's not inferrence. >> >>> Well that all jolly-well looks like inference to me. >> >> I agree that you can infer what elements a subclass has by knowing the >> elements of the base class. > > Well then you have conceded defeat, given our conversation has gone as > follows: > > jog: "So why not treat all 'inheritance' in this way [as inference]?" > Robert: "Because all inheritance is not about inference." > jog: show me an example when it is not. > Robert: I'll ignore that, and just describe the OO mechanism again. > jog: Look, here is how everything you have listed could have been > described through inference > Robert: Ok, I agree that all inheritance can be described via > inference. Here's the starting point of the conversation. You wrote: >> Name(x, Aristotle) -> Species(x, Man) >> Species(x, Man) -> Mortality(x, Mortal) >> |= Name(x, Aristotle) -> Mortalilty(x, Mortal) > >> No types or reification in sight. Instead I had two groups of >> statements: >> People = {Name, Species, Bday} >> Entities = {Species, Mortality} > >> A join of the two statements gave me the inference I required: {Name, >> Mortality}. All of a sudden it seemed simple. So some questions: > >> 1) So why not treat all 'inheritance' in this way? >> 2) Could one extend to include 'behaviour' as well? >> 3) And is this a crazy thing to suggest in a cross post to an OO >> group? I responded to your first question that when you need to make inferences, an inference engine is a good tool. Inheritance is not an inference engine. And you challenged me: >> Interesting story. Yes, when you have a problem of inference, it's >> good to use an inference engine. >> >>> So some questions: >> >>> 1) So why not treat all 'inheritance' in this way? >> >> Because all inheritance is not about inference. > > Hmmm. Then might you give an example of a situation where inheritance > cannot be described in terms of inference? To which I described what inheritance is: > Inheritance is simply the redeclaration of functions and variables in a > subscope. That's not inferrence. To which you said "subscope" was a nonsense word, and we had to define our terms. (sigh). Then you said: > Well that all jolly-well looks like inference to me. And I said ... well you can read it above. > I guess we are done, your objection having been overcome. JOG, your original question was: Why not treat all inheritance as the kind of inference you get from a database join. Answer: Because inheritance is not inference. Inheritance is the redeclaration of variables and functions in a subscope. > I think because of your OO focus you haven't > recognized that (in this case) there is a danger of putting the cart > before the horse. Logic first, mechanism second, not t'other way > around. On the contrary, your initial question was: Why can't the mechanism be treated like the logic. My answer was: because it's a mechanism. So it seems to me that you are the one with the problem discerning the difference between logic and mechanisms. Now, I understand your original point. You were trying to use inheritance as a way to make an inference, and then later found that a database join was a better approach. Good! Inference engines are usually better at making inferences than redeclaration mechanisms.
But the conclusion you apparently drew from this is that there was something wrong with the mechanism (inheritance) and that it should be more like a database join. That's rather like pounding a nail with a screwdriver and then wishing it was more like a hammer. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-06 02:29:51 -0600, Marshall <marshall.spight@gmail.com> said: > On Mar 5, 10:32 pm, Robert Martin <uncle...@objectmentor.com> wrote: >> On 2008-03-05 09:48:45 -0600, Marshall <marshall.spi...@gmail.com> said: >>> On Mar 4, 11:05 pm, Robert Martin <uncle...@objectmentor.com> wrote: >> >>>>> Furthermore, since OOPLs lack physical independence, traversing >>>>> the graph may be quite expensive, particularly in the case where >>>>> the graph is backed by storage in a database, which is part of >>>>> why ORM is such a universally bad idea. >> >>>> No, you have this wrong. ORMs generally use standard SQL queries to >>>> traverse and gather data from the DB. Then that data is placed into OO >>>> structures so that the application can take advanage of the bias. >> >>> Just the fact that they use SQL isn't sufficient. They have to >>> use it as well as a person could, though an interface that >>> is generally information-lossy enough (or at least, used in >>> a lossy way) that that's impossible. >> >> Yeah, assembly language programmers used to say the same thing about >> compilers. Then the compilers started writing more efficient code than >> the assembly language programmers could... > > I give you high marks for rhetoric here. Excellently argued! > You take the opposing side and compare them to assembly, > and compare yourself with compiled languages. That the > situation is most closely analogous to exactly the reverse > is only relevant if one is interested in a deep understanding > doesn't detract at all from the rhetorical effectiveness. Wow, it's sure getting deep in here. I can't seem to make out the content for all the fuzz and snow. 1. ORMs generate SQL in a manner analagous to compilers generating assembly. Why is it analagous? Because the ORMs can infer a considerable amount of intent, and can therefore generate highly specific SQL. (The fact that many don't is irrelevant). This is just like compilers who infer intent from the code and generate highly specific and tuned assembler. 2. Compilers got so good at this that they generated more efficient (not better) assembler code than humans could (or would). The compiler had no care for art or readability. So the compiler did things that no human would dare. ORMs have the same opportunity. So, in fact, it is not a reverse analogy. It is a very appropriate analogy. The ORM lives at a higher level of abstraction than the SQL because it has access to application intent, that the SQL does not have. > As an actual engineering argument, though, this fails. > Because it doesn't address the information-loss point I made. > No code generator can write optimal code if it's missing information > necessary to determine what is optimal. Object-graph traversal > in ORMs is *necessarily* more expensive than straightforward SQL. You are all caught up on object graph traversals as though they were the only way to work with ORMs. Indeed, most of us silly and sloppy OO programmers understand that you don't want to walk unfetched object graphs. So we populate the necessary nodes in a set of efficient querries. > In part exactly because it is *necessarily* missing the information > present in the head of the programmer who writes instead a single > SQL statement, information that is then embodied in that statement. Since the programmer writes the object graph, he knows how to ensure that the best SQL gets written... 
One day, the ORM will likely infer this from the structure of the object graph and (like a jitter) from the way the application executes. >>> I have *often* seen four and five order of magnitude >>> performance difference between straight SQL and >>> ORM SQL, across a wide variety of ORMs. The >>> very idea of ORM demands it: you have to try to >>> push a whole set-oriented language through a functional >>> interface. >> >> Bah. You don't *have* to do any such thing.I won't argue that there >> aren't programmers and teams who use their tools poorly. > > Nice dodge. > > ORMs are a breeding ground for antipatterns. RELIGION! > OO code written > in the style you advocate (Employee.get("bob")) is a performance > disaster first, and benefit-free busy work last. Not in an application that only gets Bob. Clearly, in applications that do more interesting things, we'd use more interesting constructs. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-06 14:54:27 -0600, topmind <topmind@technologist.com> said: > The industry is full of horror stories with regard to ORM mappers. The industry is full of horror stories with regard to SQL. So what? -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-06 08:46:57 -0600, "David Cressey" <cressey73@verizon.net> said: > I used to work in assembler, a long, long time ago. Me too. BAL from '69-71. Varian 620 from '71-'73. Then PDP-8, PDP-11, 8080, and 8086. The largest assembler program I worked on was 66,000 lines of code. The most challenging was 20,000 lines of 8080. I finally started writing in C in 80. C++ in 90. Java in 98. C# in 2002, and Python and Ruby in 2005. Oh, and a gazillion other languages along the way like Smalltalk, Forth, Prolog, Snobol, Cobol, Fortran, PL/1, etc, etc. > -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-06 02:12:17 -0600, Marshall <marshall.spight@gmail.com> said: > On Mar 5, 10:40 pm, Robert Martin <uncle...@objectmentor.com> wrote: >> On 2008-03-05 14:10:11 -0600, "Dmitry A. Kazakov" >> >>>>>> <snip> >>>>>> You are confusing OO with static typing. In OO languages like Ruby, >>>>>> Python, or Smalltalk you can pass any object to any function >>>>>> irrespective of type. >> >>>>> Which is a bad idea. >> >>>> Why? >> >>> Because it is in fact untyped. >> >> No, it's just not statically typed. > > This issue is entirely tangential to the thread, but just FYI. > There is a strict, formal definition of "type" under which > languages like Python, Smalltalk, etc. are untyped. > This sense of the term is often favored in type theory, > and in fact is the one that introduced the word "type" to > mathematics. So let's go back to the thread. (up above this, I'm sure you can find it) I started talking about static and dynamic typing. Somebody said dynamic typing was bad. I asked why. They said it was because it was untyped. Now aside from the fact that the argument had, at that point, gone circular, I think the person who used the word 'un' was also imputing a pejorative meaning to it. So I corrected him. > In common parlance, however, untyped languages are > typically called "dynamically typed" in cases where they > employ a runtime tag system to classify values. Agreed. The term 'untyped', whether "formal" or not, is inaccurate when used to describe a dynamically typed language. Every value has a type. > > I propose the thread is already sufficiently contentious > without introducing further areas of controversy, such > as static typing vs. whatever you care to call the other thing. So then this post was an attempt to attenuate controversy? -- Robert C. Martin (Uncle Bob)��| email: unclebob@objectmentor.com Object Mentor Inc.� � � � � ��| blog:��www.butunclebob.com The Agile Transition Experts��| web:���www.objectmentor.com 800-338-6716� � � � � � � � ��|
On 2008-03-06 03:39:54 -0600, S Perryman <q@q.net> said: > 1. Type inferencing is *strongly-typed* . I know, Steve. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-06 01:03:26 -0600, Marshall <marshall.spight@gmail.com> said: > On Mar 5, 10:48 pm, Robert Martin <uncle...@objectmentor.com> wrote: >> On 2008-03-05 09:58:43 -0600, Marshall <marshall.spi...@gmail.com> said: >> >>>> You are confusing OO with static typing. >> >>> How supremely annoying to have gone to some lengths >>> to carefully use the most strictly defined, modern type >>> system terminology, only to have it labeled as a novice >>> error by someone who missed my point entirely. >> >> How awful for you. > > I accept you apology. <grin> -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
"David Cressey" <cressey73@verizon.net> writes: > I don't think of 1971, when I wrote the garbage collector for MDL as "the > early days of garbage collection." The "early days would have been in the > 1950s when McCarthy wrote the garbage collector for LISP. The MDL garbage > collector was more ambitious than the Lisp garbage collector. But I had the > advantage of learning from the work that had already been done. apl also did new storage allocation on every assignment ... and did garbage collection & compaction when space in workspace was exhausted. the science center (4th flr of 545 tech sq) http://www.garlic.com/~lynn/subtopic.html#545tech ported apl\360 to cp67/cms ... and had to rework the garbage collection. typical apl\360 workspace was 16kbyte-32kbyte real storage .... and whole workspace was always swapped as single unit. cms allowed for multiple mbyte (virtual memory, paged) workspaces .... and the apl\360 garbage collection implementation resulted in severe page thrashing ... which had to be redone for virtual memory environment before release of cms\apl. http://www.garlic.com/~lynn/subtopic.html#hone part of work on apl virtual memory garbage collection used some application monitoring and modeling tools also done at the science center ... which was eventually released as vs/repack product in the mid-70s (included semi-automated program reorganization for virtual memory operation). vs/repack technology was used by a number of internal product groups (including varioud dbms like IMS) as part of transition from real storage to virtual storage environment. the followon to cp67, vm370 was used (originally on 370/145 in bldg. 28) for the original relational/sql implementation, system/r http://www.garlic.com/~lynn/subtopic.html#systemr
On Mar 10, 5:17 pm, Robert Martin <uncle...@objectmentor.com> wrote: > > Wow, it's sure getting deep in here. I can't seem to make out the > content for all the fuzz and snow. Indeed. Thread fatigue has set in. I suspect we are going to have to "agree to disagree" as they say, the same way we did when we had a big c.d.t/c.o. crosspost flamewar on the topic of ORMs, just like we did when we had the same conversation in 2007, and earlier in 2006 and 2005. I'll be brief: > So, in fact, it is not a reverse analogy. It is a very appropriate > analogy. The ORM lives at a higher level of abstraction than the SQL > because it has access to application intent, that the SQL does not have. I disagree on every particular. ORMs are decidedly lower level. And the reason they have written poor SQL historically, write poor SQL today, and always will write poor SQL is because they do not have access to application intent. I recognize that I am merely making assertions and not arguments at this point. > > ORMs are a breeding ground for antipatterns. > > RELIGION! Experience. Let me just put it this way. Over the last ten years I have evaluated something like 50 different ORMs. I have written at least three myself. I have participated in the industrial development of two, one at a smallish consulting company and one at a Fortune 100 company known for its engineering prowess. Further I have worked on several applications with hand-written "entity layers"; effectively these are manually-produced ad-hoc ORMs, sometimes running to a hundred thousand lines of code. I have yet to see even one ORM that didn't have egregious logical errors on page one, the sort that indicate that the designer didn't understand the basics of the relational model. I have yet to see even one of them that didn't make some architectural assumption that placed unnecessary if not ridiculous requirements on the schema. I have yet to see even one of them produce adequate performance under load. Bleah. I have more experiences to relate, but I have to go cook some rice. So, is a good ORM possible? I am convinced not. Do I have ironclad proof? I admit I don't. Alas, I shall have to make due with a highly predictive explanatory model and overwhelming supporting evidence merely. Marshall
Marshall wrote: .... > I disagree on every particular. ORMs are decidedly lower level. And > the reason they have written poor SQL historically, write poor SQL > today, and always will write poor SQL is because they do not have > access to application intent. > > I recognize that I am merely making assertions and not arguments > at this point. > ... Okay, here is a question: what is the algebra or calculus of OO-style objects?
On Mar 10, 5:47 pm, Robert Martin <uncle...@objectmentor.com> wrote: > > So then this post was an attempt to attenuate controversy? Yes. Not that I am necessarily against controversy, but perhaps one at a time is a good working limit. :-) Just as every superhero must have a nemesis, so must every newsgroup have a nemesis. And just as with superheroes, the hero/nemesis conflict is unresolvable, fundamental; it is inseparably entangled in the very identity of the participants. Only the destruction of one or the other can resolve the conflict. And of course, each participant self-identifies as the hero. Clearly comp.object and comp.databases.theory form just such a dyad, with data-centered vs. code-centered thinking the unresolvable heart of the conflict. ORMs are just flashpoints for the controversy. (ORMs being comparable to Gorilla Grodd's attempt to turn all the humans on Earth into gorillas, thwarted by the Justice League, thank goodness!) Another such dyad is comp.lang.functional and comp.lang.lisp. The unresolvable conflict between them is static vs. dynamic typing. Once a year or so, the groups break out in open hostility. Sometimes this is the result of a deliberate breaking of the cease fire by an embedded agent provocateur, (such as the last time Clan Object and Clan Database Theory fought) and sometimes it is by an innocent (or was he?!) crosspost by a unwitting noob, as in the current round of atrocities. Ugh, I'm boring even myself. It's time to play some Halo. If you like, reread this post in the voice of Comic Book Guy and see if it gets any funnier. No? Marshall
Marshall wrote: > On Mar 10, 5:47 pm, Robert Martin <uncle...@objectmentor.com> wrote: >> So then this post was an attempt to attenuate controversy? > > Yes. Not that I am necessarily against controversy, but > perhaps one at a time is a good working limit. :-) > > Just as every superhero must have a nemesis, > so must every newsgroup have a nemesis. And just as > with superheroes, the hero/nemesis conflict is unresolvable, > fundamental; it is inseparably entangled in the very identity > of the participants. Only the destruction of one or the other > can resolve the conflict. And of course, each participant > self-identifies as the hero. > > Clearly comp.object and comp.databases.theory form just > such a dyad, with data-centered vs. code-centered thinking > the unresolvable heart of the conflict. ORMs are just flashpoints > for the controversy. (ORMs being comparable to Gorilla Grodd's > attempt to turn all the humans on Earth into gorillas, thwarted > by the Justice League, thank goodness!) > > Another such dyad is comp.lang.functional and comp.lang.lisp. > The unresolvable conflict between them is static vs. dynamic > typing. Once a year or so, the groups break out in open hostility. > Sometimes this is the result of a deliberate breaking of the cease > fire by an embedded agent provocateur, (such as the last time > Clan Object and Clan Database Theory fought) and sometimes > it is by an innocent (or was he?!) crosspost by a unwitting noob, > as in the current round of atrocities. > > Ugh, I'm boring even myself. It's time to play some Halo. > If you like, reread this post in the voice of Comic Book Guy > and see if it gets any funnier. > > No? > > > Marshall I thought it was pretty funny. And accurate. But before you fatigue completely from the thread: Upthread, you mentioned you had gone to some length to use most strictly defined modern type system terminology. I don't suppose you have a cite for some of it? I'm clearly out of date. I tend to think of typing systems as falling somewhere on a four-dimensional space with the axes: Latent/Manifest Structural/Nominal Static/Dynamic Strong/Weak I wonder just how out of date I am.... Cheers, Joe
"JOG" <jog@cs.nott.ac.uk> wrote in message news:0cd61579-0f26-422c-9aec-908ffdea59ff@i7g2000prf.googlegroups.com... > On Mar 3, 2:07 pm, Thomas Gagne <tga...@wide-open-west.com> wrote: >> All attempts by applications to access a DB's tables and columns >> directly violates design principles that guard against close-coupling. >> This is a basic design tenet for OO. Violating it when jumping from OO >> to RDB is, I think, the source of problem that are collectively and >> popularly referred to as the object-relational impedance mismatch. > > I wondered if we might be able to come up with some agreement on what > object-relational impedence mismatch actually means. I always thought > the mismatch was centred on the issue that a single object != single > tuple, but it appears there may be more to it than that. > > I was hoping perhaps people might be able to offer perspectives on the > issues that they have encountered. One thing I would like to avoid > (outside of almost flames of course), is the notion that database > technology is merely a persistence layer (do people still actually > think that?) - I wonder if the 'mismatch' stems from such a > perspective. I think the mismatch has nothing to do with either. The relational camp is focused on "what;" the object camp is focused on "how." The relational camp seeks to determine whether a database state is possible; the object camp seeks to specify possible changes of state. Where the impedance comes in is that there may be multiple paths from one possible database state to another. The focus of a programmer is to get from point A to point B. On the one hand, you can state which path you're going to take; on the other hand, you can specify which paths are valid. But just because you take a particular path doesn't necessarily mean that another path is not just as valid: there may be more than one valid path from point A to point B. But OO doesn't require that every valid path be specified--only that the paths specified be valid paths; neither does the RM have a mechanism to detect which path was taken. (Candidate keys do not necessarily rigidly designate.) This is the essence of the object-relational impedance mismatch. There would be no impedance if there were a mechanism in the RM to detect which path was taken during a transition; there would be no impedance if OO required specification of /all/ possible paths. If it were possible to detect which path was taken, then it should be possible to determine the set of all valid paths; if all valid paths were specified, it should be possible to determine which path is being taken.
On Mar 10, 8:55 pm, Joe Thurbon <use...@thurbon.com> wrote: > Marshall wrote: > > > > Ugh, I'm boring even myself. It's time to play some Halo. > > I thought it was pretty funny. > > And accurate. Thanks. I should mention that I had an awesome flag capture. Most of the enemy team was occupied midfield, and they left their base empty. I sailed in, grabbed the flag, and hit the man-cannon. Midflight I hear a teammate say on the comm "We'll be waiting for you." And flying up to the drop point, I see a Warthog with driver and gunner, right there. I hop in the passenger seat with the flag and we hightail it back to our base. As we're almost there I hear the unmistakable beep-beep of an enemy lock on our vehicle; I immediately hop out just seconds before missiles destroy the 'hog and the other two guys, but I made it out in time and scored. I play under the tag "Natural Join". > But before you fatigue completely from the thread: > > Upthread, you mentioned you had gone to some length to use most strictly > defined modern type system terminology. I don't suppose you have a cite > for some of it? I'm clearly out of date. http://www.amazon.com/Types-Programming-Languages-Benjamin-Pierce/dp/0262162091 Pierce seems to be the guy everyone's citing these days, at least for definitions and pithy summaries of the field. > I tend to think of typing systems as falling somewhere on a > four-dimensional space with the axes: > > Latent/Manifest > Structural/Nominal > Static/Dynamic > Strong/Weak Well, strong/weak has been deprecated pretty hard. There's this term "safe" now, to describe what C isn't. "Latent" seems to mostly be a synonym for "dynamic" now. There's also the "explicitly annotated" term, to describe what languages like C++ and Java are, and OCaml and Haskell aren't. > I wonder just how out of date I am.... If you understand structural vs. nominal, you're way ahead of most people. Marshall
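For readers who want a concrete picture of the structural/nominal distinction Marshall and Joe are trading terms about, here is a small Python sketch. The class names are made up for illustration, and since Python itself is dynamically checked, the distinction applies to the annotations as seen by a static checker such as mypy: typing.Protocol gives structural conformance, ordinary inheritance gives nominal conformance.

from abc import ABC, abstractmethod
from typing import Protocol

class NamedNominal(ABC):
    # nominal: a type conforms only by declaring inheritance from this class
    @abstractmethod
    def name(self) -> str: ...

class NamedStructural(Protocol):
    # structural: anything with a matching name() method conforms, no declaration needed
    def name(self) -> str: ...

class Dog:                       # note: does NOT inherit from NamedNominal
    def name(self) -> str:
        return "Rex"

def greet(x: NamedStructural) -> str:
    return "hello " + x.name()

print(greet(Dog()))   # accepted structurally; a nominal checker would reject
                      # Dog here unless it declared itself a subtype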
Marshall wrote: > On Mar 10, 8:55 pm, Joe Thurbon <use...@thurbon.com> wrote: >> Marshall wrote: [...] > I immediately hop out > just seconds before missiles destroy the 'hog and > the other two guys, but I made it out in time and > scored. > Played. > I play under the tag "Natural Join". I used to play Bolo. I played as Fetid Dingo's Kidney. My play was no better than my name. > > >> But before you fatigue completely from the thread: >> >> Upthread, you mentioned you had gone to some length to use most strictly >> defined modern type system terminology. I don't suppose you have a cite >> for some of it? I'm clearly out of date. > > http://www.amazon.com/Types-Programming-Languages-Benjamin-Pierce/dp/0262162091 > Thanks for that. I'll have a look. > Pierce seems to be the guy everyone's citing these days, > at least for definitions and pithy summaries of the field. > > >> I tend to think of typing systems as falling somewhere on a >> four-dimensional space with the axes: >> >> Latent/Manifest >> Structural/Nominal >> Static/Dynamic >> Strong/Weak > > Well, strong/weak has been deprecated pretty hard. Not hard enough, I think (and I say that not knowing how hard you mean.) From my perspective, weak only really ever seemed to have assembly and C in it, anyway. > There's > this term "safe" now, to describe what C isn't. That was 'strong' for me. > "Latent" seems > to mostly be a synonym for "dynamic" now. Oh. Wow. I used "latent" for things like Ocaml, where types are inferred, as well as things like Ruby where references are un-typed but objects are typed. > There's also the "explicitly annotated" term, > to describe what languages like C++ and Java are, and OCaml and > Haskell aren't. My "manifest". > > >> I wonder just how out of date I am.... > > If you understand structural vs. nominal, you're way ahead > of most people. Woot! Anyway, back to my personal Java hell ... Cheers, Joe
Marshall <marshall.spight@gmail.com> wrote in news:026db19e-4064-4b37- a98e-0826695ed5a7@s8g2000prg.googlegroups.com: > On Mar 10, 10:47 am, "Yagotta B. Kidding" <y...@mymail.com> wrote: >> > Mr. Perryman's example was written in a single-assignment, >> > mathematical style. >> >> And yet you still do not see where the assignment comes in in Mr. >> Perryman's example. > > You understand that single-assignment is different from > assignment, right? You know that single-assignment doesn't > cause any problems with referential transparency, right? I readily admit that I did not pay attention to the 'single' word until you pointed it out. I must point out that no one except wikipedia or some misguided souls uses the term 'single assignment' any more. The accepted term is now name binding, or let binding, to make the point that no assignment in the imperative sense takes place. You won't find the term in either the Haskell or OCAML manuals, btw. Just to prevent a potential mud-slinging contest: nobody disputes the fact that the mongrel language OCAML does have real assignment. Nevertheless, it is possible to express Mr. Perryman's projection with OCAML's structural subtyping and let binding, that much is true, but the attempt would look butt ugly. It is perhaps worth remembering that let binding is an FP feature unavailable in any major OOP language. A question arises, then: why bother, and not go for the real thing, FP? > > Well, we're into a subpoint of a subpoint of a subpoint. > Perhaps a refreshing Snapple is in order! > > > Marshall >
Robert Martin wrote: > On 2008-03-06 02:29:51 -0600, Marshall <marshall.spight@gmail.com> said: > > > On Mar 5, 10:32 pm, Robert Martin <uncle...@objectmentor.com> wrote: > >> On 2008-03-05 09:48:45 -0600, Marshall <marshall.spi...@gmail.com> said= : > >>> On Mar 4, 11:05 pm, Robert Martin <uncle...@objectmentor.com> wrote: > >> > >>>>> Furthermore, since OOPLs lack physical independence, traversing > >>>>> the graph may be quite expensive, particularly in the case where > >>>>> the graph is backed by storage in a database, which is part of > >>>>> why ORM is such a universally bad idea. > >> > >>>> No, you have this wrong. ORMs generally use standard SQL queries to > >>>> traverse and gather data from the DB. Then that data is placed into = OO > >>>> structures so that the application can take advanage of the bias. > >> > >>> Just the fact that they use SQL isn't sufficient. They have to > >>> use it as well as a person could, though an interface that > >>> is generally information-lossy enough (or at least, used in > >>> a lossy way) that that's impossible. > >> > >> Yeah, assembly language programmers used to say the same thing about > >> compilers. Then the compilers started writing more efficient code than= > >> the assembly language programmers could... > > > > I give you high marks for rhetoric here. Excellently argued! > > You take the opposing side and compare them to assembly, > > and compare yourself with compiled languages. That the > > situation is most closely analogous to exactly the reverse > > is only relevant if one is interested in a deep understanding > > doesn't detract at all from the rhetorical effectiveness. > > Wow, it's sure getting deep in here. I can't seem to make out the > content for all the fuzz and snow. > > 1. ORMs generate SQL in a manner analagous to compilers generating > assembly. Why is it analagous? Because the ORMs can infer a > considerable amount of intent, and can therefore generate highly > specific SQL. (The fact that many don't is irrelevant). This is just > like compilers who infer intent from the code and generate highly > specific and tuned assembler. That is a huge stretch. SQL is *not* low-level. I will agree there are areas for improvement, but that's true with *any* language. What ORM's do is more like translating Smalltalk to Python (or visa versa) because Python fans don't want to deal with Smalltalk per dogma or personal preference. > > 2. Compilers got so good at this that they generated more efficient > (not better) assembler code than humans could (or would). The compiler > had no care for art or readability. So the compiler did things that no > human would dare. ORMs have the same opportunity. The RDBMS already has capabilities to select optimum (or better) execution paths. If the ORM improves it even more, then good for it. Often, however, that is not the case. > > So, in fact, it is not a reverse analogy. It is a very appropriate > analogy. The ORM lives at a higher level of abstraction than the SQL > because it has access to application intent, that the SQL does not have. You attribute magic qualities to ORM's that they don't deserve. ORM's are complex tools that require experts to use effectively and still are a source for a lot of headaches and performance bottlenecks. I've read a *lot* of gripes about ORM's on the web. > > > As an actual engineering argument, though, this fails. > > Because it doesn't address the information-loss point I made. 
> > No code generator can write optimal code if it's missing information > > necessary to determine what is optimal. Object-graph traversal > > in ORMs is *necessarily* more expensive than straightforward SQL. > > You are all caught up on object graph traversals as though they were > the only way to work with ORMs. Indeed, most of us silly and sloppy OO > programmers understand that you don't want to walk unfetched object > graphs. So we populate the necessary nodes in a set of efficient > querries. Why not compare master SQL programmers with master OO'ers instead of sub-par SQL'ers with master OO'ers? > > > In part exactly because it is *necessarily* missing the information > > present in the head of the programmer who writes instead a single > > SQL statement, information that is then embodied in that statement. > > Since the programmer writes the object graph, he knows how to ensure > that the best SQL gets written... One day, the ORM will likely infer > this from the structure of the object graph and (like a jitter) from > the way the application executes. MindReader 2.0 > -- > Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com -T-
Robert Martin wrote: > On 2008-03-06 14:54:27 -0600, topmind <topmind@technologist.com> said: > > The industry is full of horror stories with regard to ORM mappers. > > The industry is full of horror stories with regard to SQL. So what? Using ORM's effectively requires both decent knowledge of SQL/RDBMS and the ORM tool. Using SQL/RDBMS effectively only requires the first. If the ORM expert doesn't know SQL/RDBMS sufficiently, then they cannot troubleshoot issues properly. ORM's are a jobs program. (May be good for a paycheck, but is otherwise an unnecessary expense to a company.) > -- > Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com -T-
On Mar 10, 4:53 pm, Bob Badour <bbad...@pei.sympatico.ca> wrote: > TroyK wrote: > > On Mar 8, 7:09 am, rp...@pcwin518.campus.tue.nl (rpost) wrote: > > >>TroyK wrote: > > >>>Who said anything about stored procedures? I'm talking about > >>>implementing the business rules via constraint declaration in the > >>>database, and deriving new values throught the application of SQL > >>>queries. > > >>Do you also treat dynamic constraints in this way? > > >>E.g. "unless otherwise specified, an employee shall get a 2% > >>salary increase after each full year of employment". > > with not_excluded_ee = ( >   employees join ( employees{ee#} minus excluded_ees{ee#} ) > ), increases = extend ( >    not_excluded_ee where is_anniversary_period(pay_period(now),hire_date) > ) add increase = salary * percent(2) > ... > > >>Or: "two movable 3D shapes shall never collide; collision is avoided > >>by the following repositioning algorithm that is guaranteed > >>to pull all objects apart that get too close: (...), however, this > >>algorithm may be replaced with another that has the same property". > > with other_objects = objects rename all prepending "other_" > , collisions = objects join other_objects >    where rank_attrib < other_rank_attrib >      and min_distance(polygons,other_polygons) < threshold_distance > , while exists(collisions) { >      update collisions >      set position = ... >      , other_position = ... > > } > > >>-- > >>Reinier > > > Inferring the definition of "dynamic constraint" from your examples, > > in the first, I think that it would be sufficient to model and record > > whatever attribute means "otherwise specified" and include that as > > part of the condition in the constraint declaration. > > What was the constraint part again? > I suppose a db that maintains some temporal dimension for the employee facts ("history table" or somesuch) would have a constraint that the tuples within it must be derived using your expression above. Absent that, the expression itself constrains the values that can be derived. Is that a fair statement? > > > I haven't had occasion to apply the technique to anything more complex > > than such an example, so I'll have to say "I don't know" to the 3D > > example. > Nice sketch of the 3D solution, Bob. I've set aside some time to play with SQL Server's new spatial datatypes (Geography and Geometry), but this cut of the feature only supports 2D. TroyK
topmind wrote: > > Robert Martin wrote: > >>On 2008-03-06 02:29:51 -0600, Marshall <marshall.spight@gmail.com> said: >> >> >>>On Mar 5, 10:32 pm, Robert Martin <uncle...@objectmentor.com> wrote: >>> >>>>On 2008-03-05 09:48:45 -0600, Marshall <marshall.spi...@gmail.com> said: >>>> >>>>>On Mar 4, 11:05 pm, Robert Martin <uncle...@objectmentor.com> wrote: >>>> >>>>>>>Furthermore, since OOPLs lack physical independence, traversing >>>>>>>the graph may be quite expensive, particularly in the case where >>>>>>>the graph is backed by storage in a database, which is part of >>>>>>>why ORM is such a universally bad idea. >>>> >>>>>>No, you have this wrong. ORMs generally use standard SQL queries to >>>>>>traverse and gather data from the DB. Then that data is placed into OO >>>>>>structures so that the application can take advanage of the bias. >>>> >>>>>Just the fact that they use SQL isn't sufficient. They have to >>>>>use it as well as a person could, though an interface that >>>>>is generally information-lossy enough (or at least, used in >>>>>a lossy way) that that's impossible. >>>> >>>>Yeah, assembly language programmers used to say the same thing about >>>>compilers. Then the compilers started writing more efficient code than >>>>the assembly language programmers could... >>> >>>I give you high marks for rhetoric here. Excellently argued! >>>You take the opposing side and compare them to assembly, >>>and compare yourself with compiled languages. That the >>>situation is most closely analogous to exactly the reverse >>>is only relevant if one is interested in a deep understanding >>>doesn't detract at all from the rhetorical effectiveness. >> >>Wow, it's sure getting deep in here. I can't seem to make out the >>content for all the fuzz and snow. >> >>1. ORMs generate SQL in a manner analagous to compilers generating >>assembly. Why is it analagous? Because the ORMs can infer a >>considerable amount of intent, and can therefore generate highly >>specific SQL. (The fact that many don't is irrelevant). This is just >>like compilers who infer intent from the code and generate highly >>specific and tuned assembler. > > > That is a huge stretch. SQL is *not* low-level. I will agree there > are areas for improvement, but that's true with *any* language. What > ORM's do is more like translating Smalltalk to Python (or visa versa) > because Python fans don't want to deal with Smalltalk per dogma or > personal preference. > > >>2. Compilers got so good at this that they generated more efficient >>(not better) assembler code than humans could (or would). The compiler >>had no care for art or readability. So the compiler did things that no >>human would dare. ORMs have the same opportunity. > > > The RDBMS already has capabilities to select optimum (or better) > execution paths. If the ORM improves it even more, then good for it. > Often, however, that is not the case. > > >>So, in fact, it is not a reverse analogy. It is a very appropriate >>analogy. The ORM lives at a higher level of abstraction than the SQL >>because it has access to application intent, that the SQL does not have. > > > You attribute magic qualities to ORM's that they don't deserve. ORM's > are complex tools that require experts to use effectively and still > are a source for a lot of headaches and performance bottlenecks. I've > read a *lot* of gripes about ORM's on the web. > > >>>As an actual engineering argument, though, this fails. >>>Because it doesn't address the information-loss point I made. 
>>>No code generator can write optimal code if it's missing information >>>necessary to determine what is optimal. Object-graph traversal >>>in ORMs is *necessarily* more expensive than straightforward SQL. >> >>You are all caught up on object graph traversals as though they were >>the only way to work with ORMs. Indeed, most of us silly and sloppy OO >>programmers understand that you don't want to walk unfetched object >>graphs. So we populate the necessary nodes in a set of efficient >>querries. > > > Why not compare master SQL programmers with master OO'ers instead of > sub-par SQL'ers with master OO'ers? Better yet, just listen to those of us who are both already.
TroyK wrote: > On Mar 10, 4:53 pm, Bob Badour <bbad...@pei.sympatico.ca> wrote: > >>TroyK wrote: >> >>>On Mar 8, 7:09 am, rp...@pcwin518.campus.tue.nl (rpost) wrote: >> >>>>TroyK wrote: >> >>>>>Who said anything about stored procedures? I'm talking about >>>>>implementing the business rules via constraint declaration in the >>>>>database, and deriving new values throught the application of SQL >>>>>queries. >> >>>>Do you also treat dynamic constraints in this way? >> >>>>E.g. "unless otherwise specified, an employee shall get a 2% >>>>salary increase after each full year of employment". >> >>with not_excluded_ee = ( >> employees join ( employees{ee#} minus excluded_ees{ee#} ) >>), increases = extend ( >> not_excluded_ee where is_anniversary_period(pay_period(now),hire_date) >>) add increase = salary * percent(2) >>... >> >> >>>>Or: "two movable 3D shapes shall never collide; collision is avoided >>>>by the following repositioning algorithm that is guaranteed >>>>to pull all objects apart that get too close: (...), however, this >>>>algorithm may be replaced with another that has the same property". >> >>with other_objects = objects rename all prepending "other_" >>, collisions = objects join other_objects >> where rank_attrib < other_rank_attrib >> and min_distance(polygons,other_polygons) < threshold_distance >>, while exists(collisions) { >> update collisions >> set position = ... >> , other_position = ... >> >>} >> >>>>-- >>>>Reinier >> >>>Inferring the definition of "dynamic constraint" from your examples, >>>in the first, I think that it would be sufficient to model and record >>>whatever attribute means "otherwise specified" and include that as >>>part of the condition in the constraint declaration. >> >>What was the constraint part again? > > I suppose given a db that maintains some temporal dimension for the > employee facts ("history table" or somesuch) would have a constraint > that the tuples within it must be derived using your expression above. > Absent that, the expression itself constrains the values that can be > derived. Is that a fair statement? Oh, you mean a constraint like: ( ee# in excluded_ees{ee#} ) or ( not is_anniversary_period(pay_period(now),hire_date) ) or ( salary >= previous_salary * percent(1.02) ) What's so hard about that? Note, I used >= to allow for other coincidental raises. >>>I haven't had occasion to apply the technique to anything more complex >>>than such an example, so I'll have to say "I don't know" to the 3D >>>example. > > Nice sketch of the 3D solution, Bob. I've set aside some time to play > with SQL Server's new spatial datatypes (Geography and Geometry), but > this cut of the feature only supports 2D. Thank you. I hope you have fun with the spatial datatypes, but I wish you had a nicer language than SQL for playing with them.
-snippage- > Oh, you mean a constraint like: > > ( ee# in excluded_ees{ee#} ) > or ( not is_anniversary_period(pay_period(now),hire_date) ) > or ( salary >= previous_salary * percent(1.02) ) > > What's so hard about that? Nothing - That's been my point :) Much easier to do this stuff using relational (or, heck, even SQL) than by using "objects". TroyK
TroyK wrote: > -snippage- > >>Oh, you mean a constraint like: >> >>( ee# in excluded_ees{ee#} ) >>or ( not is_anniversary_period(pay_period(now),hire_date) ) >>or ( salary >= previous_salary * percent(1.02) ) >> >>What's so hard about that? > > Nothing - That's been my point :) Much easier to do this stuff using > relational (or, heck, even SQL) than by using "objects". Oops, there was a typo mistake in it. I guess avoiding those is the hard part.
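For what it's worth, a simplified sketch of the same rule declared in the database rather than in object code, using SQLite from Python. The table and column names are hypothetical, and the anniversary-period test from Bob's expression is deliberately elided; the sketch only shows the shape of the idea (any salary change for a non-excluded employee must be at least a 2% raise).

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE employees (ee_num INTEGER PRIMARY KEY, salary REAL NOT NULL);
CREATE TABLE excluded_ees (ee_num INTEGER PRIMARY KEY);

-- any salary change for a non-excluded employee must be at least a 2% raise
CREATE TRIGGER min_raise
BEFORE UPDATE OF salary ON employees
FOR EACH ROW
WHEN OLD.ee_num NOT IN (SELECT ee_num FROM excluded_ees)
     AND NEW.salary < OLD.salary * 1.02
BEGIN
    SELECT RAISE(ABORT, 'raise below the 2% floor');
END;
""")
db.execute("INSERT INTO employees VALUES (1, 1000.0)")

db.execute("UPDATE employees SET salary = 1020.0 WHERE ee_num = 1")      # ok: exactly 2%
try:
    db.execute("UPDATE employees SET salary = 1030.0 WHERE ee_num = 1")  # rejected: < 2% over 1020
except sqlite3.IntegrityError as e:
    print("constraint fired:", e)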
"Bob Badour" <bbadour@pei.sympatico.ca> wrote in message news:47d6bced$0$4064$9a566e8b@news.aliant.net... > topmind wrote: > >> >> Robert Martin wrote: >> >>>On 2008-03-06 02:29:51 -0600, Marshall <marshall.spight@gmail.com> said: >>> >>> >>>>On Mar 5, 10:32 pm, Robert Martin <uncle...@objectmentor.com> wrote: >>>> >>>>>On 2008-03-05 09:48:45 -0600, Marshall <marshall.spi...@gmail.com> >>>>>said: >>>>> >>>>>>On Mar 4, 11:05 pm, Robert Martin <uncle...@objectmentor.com> wrote: >>>>> >>>>>>>>Furthermore, since OOPLs lack physical independence, traversing >>>>>>>>the graph may be quite expensive, particularly in the case where >>>>>>>>the graph is backed by storage in a database, which is part of >>>>>>>>why ORM is such a universally bad idea. >>>>> >>>>>>>No, you have this wrong. ORMs generally use standard SQL queries to >>>>>>>traverse and gather data from the DB. Then that data is placed into >>>>>>>OO >>>>>>>structures so that the application can take advanage of the bias. >>>>> >>>>>>Just the fact that they use SQL isn't sufficient. They have to >>>>>>use it as well as a person could, though an interface that >>>>>>is generally information-lossy enough (or at least, used in >>>>>>a lossy way) that that's impossible. >>>>> >>>>>Yeah, assembly language programmers used to say the same thing about >>>>>compilers. Then the compilers started writing more efficient code than >>>>>the assembly language programmers could... >>>> >>>>I give you high marks for rhetoric here. Excellently argued! >>>>You take the opposing side and compare them to assembly, >>>>and compare yourself with compiled languages. That the >>>>situation is most closely analogous to exactly the reverse >>>>is only relevant if one is interested in a deep understanding >>>>doesn't detract at all from the rhetorical effectiveness. >>> >>>Wow, it's sure getting deep in here. I can't seem to make out the >>>content for all the fuzz and snow. >>> >>>1. ORMs generate SQL in a manner analagous to compilers generating >>>assembly. Why is it analagous? Because the ORMs can infer a >>>considerable amount of intent, and can therefore generate highly >>>specific SQL. (The fact that many don't is irrelevant). This is just >>>like compilers who infer intent from the code and generate highly >>>specific and tuned assembler. >> >> >> That is a huge stretch. SQL is *not* low-level. I will agree there >> are areas for improvement, but that's true with *any* language. What >> ORM's do is more like translating Smalltalk to Python (or visa versa) >> because Python fans don't want to deal with Smalltalk per dogma or >> personal preference. >> >> >>>2. Compilers got so good at this that they generated more efficient >>>(not better) assembler code than humans could (or would). The compiler >>>had no care for art or readability. So the compiler did things that no >>>human would dare. ORMs have the same opportunity. >> >> >> The RDBMS already has capabilities to select optimum (or better) >> execution paths. If the ORM improves it even more, then good for it. >> Often, however, that is not the case. >> >> >>>So, in fact, it is not a reverse analogy. It is a very appropriate >>>analogy. The ORM lives at a higher level of abstraction than the SQL >>>because it has access to application intent, that the SQL does not have. >> >> >> You attribute magic qualities to ORM's that they don't deserve. ORM's >> are complex tools that require experts to use effectively and still >> are a source for a lot of headaches and performance bottlenecks. 
I've >> read a *lot* of gripes about ORM's on the web. >> >> >>>>As an actual engineering argument, though, this fails. >>>>Because it doesn't address the information-loss point I made. >>>>No code generator can write optimal code if it's missing information >>>>necessary to determine what is optimal. Object-graph traversal >>>>in ORMs is *necessarily* more expensive than straightforward SQL. >>> >>>You are all caught up on object graph traversals as though they were >>>the only way to work with ORMs. Indeed, most of us silly and sloppy OO >>>programmers understand that you don't want to walk unfetched object >>>graphs. So we populate the necessary nodes in a set of efficient >>>querries. >> >> >> Why not compare master SQL programmers with master OO'ers instead of >> sub-par SQL'ers with master OO'ers? > > Better yet, just listen to those of us who are both already. Mastery is evident in those who have attained it. Fortunately, most can recognize those who self-aggrandize as the charlatans, pretenders and frauds that they are.
On 2008-03-10 23:28:40 -0500, Marshall <marshall.spight@gmail.com> said: > Clearly comp.object and comp.databases.theory form just > such a dyad, with data-centered vs. code-centered thinking > the unresolvable heart of the conflict. So, in this entire discussion, have you seen me attack database theory even once? -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On Mar 11, 6:25 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-10 23:28:40 -0500, Marshall <marshall.spi...@gmail.com> said: > > > Clearly comp.object and comp.databases.theory form just > > such a dyad, with data-centered vs. code-centered thinking > > the unresolvable heart of the conflict. > > So, in this entire discussion, have you seen me attack database theory > even once? Offhand, I would suggest that it's probably not a good idea to pay too much attention to any post of mine in which I mention Gorilla Grodd. And I say that despite being absolutely convinced that Powers Boothe did a great job with the voice during that whole arc where Grodd and Lex Luthor (voiced by Clancy Brown) wrestle over control of the Legion of Doom during season five. Okay, "attack?" I dunno. I believe that the policies you advocate more or less reduce the DBMS to a dumb record-store. I've wasted a lot of my life explaining why, so no redux tonight. Hey, that Justice League show is awesome for trivia, though. Clancy Brown, for example. The voice of Lex Luthor. Did you know he also does the voice for Mr. Krabs on Spongebob AND he played Rawhide in "Buckaroo Banzai" AND was the villian in "Highlander." Isn't that hilarious? Wasn't the evil Kurgan in Highlander awesome? Then we have this weird episode in which the Flash and Lex Luthor end up changing bodies. On the Justice League, Flash is voiced by Michael Rosenbaum, who of course everyone knows plays Lex Luthor on Smallville. So once they changed bodies, you have Michael Rosenbaum playing ... Lex Luthor on Justice League! I thought it was an amusing joke. And then also you have Carl Lumbly, who plays the Martian Manhunter on Justice League, and he was ALSO in Buckaroo Banzai. He was one of the Johns. John Parker I think. So there's two people who voice major characters in Justice League who were in Buckaroo Banzai. This one time, I'm in the editing booth, (the director had asked me over for some some advice on some effects shots) and the director's cell phone rings, and he says he has to take the call and he pops up and he's gone for five minutes or so. When he comes back, he tells me about the call. He says something like, you maybe don't know who this is, but I just talked with Carl Lumbly. And I said Carl Lumbly! Of course I know who Carl Lumbly is. I just watched him beat the crap out of Edward James Olmos a couple of weeks ago. You see, Lumbly had been the guest star in an episode of BattleStar Galactica. He played a pilot who had been lost to the Cylons many years before, and somehow escapes. He got in a huge fight with Adama. His character's name was ... M something. He was a lieutenant. M? I can't remember. Novacek! That was it: lieutenant Novacek. Where did that "M" come from? What's wrong with my memory? Marshall
Marshall wrote: > On Mar 6, 9:27 am, S Perryman <q...@q.com> wrote: >>You regard "X-oriented" as something for which the facilities to support >>X are provided at a fundamental level (like arithmetic ops, CAR/CDR in >>Lisp etc) and not built (by whoever) from the fundamental constructs of >>a prog lang. >>Fair enough. > Right. If we consider *potential* built constructs, and we are > speaking of general purpose programming languages, then > right away all languages collapse into a single category. > We might make some allowances for languages whose standard > library contains specific constructs, however. We might reasonably > classify some functional languages that make extensive use of > map, filter, and fold as being at least modestly list-oriented, > despite the fact that their list constructs are really recursively > defined union types. But it's probably better to consider the > recursively defined type together with the higher-order function > constructs as being the axiomatic signature of such languages. I wouldn't consider it so, but it does no harm in debate to do so. > Another important reason to consider the fundamental constructs > of a PL is so that we can treat the PL as an axiomatic system. > To me, the most interesting thing one can do with code is > *reason* about it. This is why type systems are interesting > (and "dynamically typed" languages not so much.) It's also > part of why the relational model is so interesting. And it's > why OOPLs don't hold my interest so much any more; > their axioms are complicated and often weak. Which "axioms" in particular cause you pain ?? M>Virtually no languages have primitive support for anything like a M>collection. SQL and SETL and a few others; that's it. There are M>some languages that were designed from the start with *list* M>processing in mind: lisp (and I should probably also mention the M>APL family here.) There are some *very* interesting things M>in there, but not things I would say could be strictly described M>as set-oriented. >>I take a slightly different view in that I obviously have the need >>for various collection types, and support for a Relational "calculus" >>to use on those collections. I also need support for collections of >>ADTs in particular. > The collections question is quite interesting: Lists, maps, bags, > sets, tables, trees. It is apparent after only a modest amount > of study that maps, sets, and tables are thoroughly and beautifully > handled with relations. Collections should be deliberately vague. Insert and remove. Empty. Contains an item, how many occurrences of an item etc. A set is a specific form of collection (contains 0 or 1 instance of any given element, insertion of a contained element has no effect etc) . And so on. > It took me a lot longer but I've reached > the conclusion that lists fall in to the same category. Sequences are conceptually a partial mapping of (Index,Element) tuples. Certain mappings being undefined (hence partial) . > Trees no longer seem like a single category to me: there > are what I call "statically structured" trees, for example > Customers/ Invoices/InvoiceLineItems, and "dynamically > structured" trees, such as a parse tree. A tree is a specific form of graph. Graphs are conceptually two sets with an invariant relationship between the sets. Trees merely define additional invariants on those sets. > Statically structured > trees are are a particular strong point for SQL and also a > strong point for the RM. 
Dynamically structured trees *can be* > handled with the RM but I don't think it does as good a job > as is done in, say, FP languages with union types and > structural recursion. I conjecture that it may be possible > to develop best practices and/or tiny extensions to the > RM such that it does as good a job, but currently I'm > leaning in the direction of thinking this will not be possible. > I'm not completely thrilled with structural recursion, but > I have yet to see anything better. Is your issue with performance for things like breadth/depth traversal etc ?? >>If possible I would like the prog lang user to be able to construct >>specific collections such as sets etc, and for the prog lang env to be >>within reasonable performance of something that is designed for the >>support of some one specific aspect (as Lisp is for lists etc) . >>I think that FP is the paradigm that could possibly do this. > An important aspect of the performance requirement is physical > independence, something that is largely ignored in PL theory, > sadly. The separation of specification (interface) from implementation is something key to ADT theory. And for those prog langs (CLU, Modula-2 etc) that support one impl of each ADT per program unit, you have the independence. However, the coming of OO (unintentionally) threw a spanner in the works. It became possible to have multiple *different* impls of one ADT used *concurrently* in the same program unit. A rough analogy (that AFAIK is not supported in commercial systems) would be an SQL database that for some given table operates a B-tree for certain records in the table, hashing for others etc. And expects the performance for ops on the entire table to be the same whatever predicates are applied to the table. > If we look at the mathematical universe, we usually encounter > an extensional viewpoint. And if something is expressed > intensionally, or algorithmically, mathematicians are free to > immediately think of its extension, because they do not have > to limit themselves to what is computable. > On the other hand, if we look at the computable universe, we > see more often the intensional viewpoint. And we are not > free to immediately shift into extensional mode, because > of the possibly-infinite, almost-certainly-prohibitive cost of > doing so. > The RM gives the best handle on the computably extensional > viewpoint I have encountered. FP gives the best handle > on the intensional viewpoint. I would say that FP has a strong extensional viewpoint too, solely because it too is based on mathematical concepts (functions) . Regards, Steven Perryman
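Two of the ideas in this exchange are easy to make concrete. Below is a rough Python sketch, with invented names: first a sequence treated as a relation (a set of (index, element) pairs, i.e. a partial mapping), then a "dynamically structured" tree written as a recursive union type and consumed by structural recursion, roughly the FP style Marshall refers to.

from dataclasses import dataclass
from typing import Union

# 1. A sequence viewed as a relation: a set of (index, element) tuples.
seq_as_relation = {(0, "a"), (1, "b"), (2, "c")}
third = {e for (i, e) in seq_as_relation if i == 2}   # restrict on index, project element
assert third == {"c"}

# 2. A dynamically structured tree as a recursive union type.
@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

Expr = Union[Num, Add]

def evaluate(e: Expr) -> int:
    # structural recursion over the shape of the tree
    if isinstance(e, Num):
        return e.value
    return evaluate(e.left) + evaluate(e.right)

assert evaluate(Add(Num(1), Add(Num(2), Num(3)))) == 6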
On Mar 11, 1:05 am, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-08 21:39:37 -0600, JOG <j...@cs.nott.ac.uk> said: > > > > > On Mar 9, 2:23 am, Robert Martin <uncle...@objectmentor.com> wrote: > >> On 2008-03-06 06:37:19 -0600, JOG <j...@cs.nott.ac.uk> said: > > >>> On Mar 6, 6:26 am, Robert Martin <uncle...@objectmentor.com> wrote: > >>>> That's not inferrence. > > >>> Well that all jolly-well looks like inference to me. > > >> I agree that you can infer what elements a subclass has by knowing the > >> elements of the base class. > > > Well then you have conceded defeat, given our conversation has gone as > > follows: > > > jog: "So why not treat all 'inheritance' in this way [as inference]?" > > Robert: "Because all inheritance is not about inference." > > jog: show me an example when it is not. > > Robert: I'll ignore that, and just describe the OO mechanism again. > > jog: Look, here is how everything you have listed could have been > > described through inference > > Robert: Ok, I agree that all inheritance can be described via > > inference. > > Here's the starting point of the conversation. You wrote: > > > > >> Name(x, Aristotle) -> Species(x, Man) > >> Species(x, Man) -> Mortality(x, Mortal) > >> |= Name(x, Aristotle) -> Mortalilty(x, Mortal) > > >> No types or reification in sight. Instead I had two groups of > >> statements: > >> People = {Name, Species, Bday} > >> Entities = {Species, Mortality} > > >> A join of the two statements gave me the inference I required: {Name, > >> Mortality}. All of a sudden it seemed simple. So some questions: > > >> 1) So why not treat all 'inheritance' in this way? > >> 2) Could one extend to include 'behaviour' as well? > >> 3) And is this a crazy thing to suggest in a cross post to an OO > >> group? > > I responded to your first question that when you need to make > inferences, an inference engine is a good tool. Inheritance is not an > inference engine. A red herring as far as I'm concerned this Robert - after all RM is not an "inference engine" either. What I am questioning whether we need the concept of inheritance /whatsoever/. It does not exist in logic, it has no underlying theoretical justification, and is purely an ad hoc mechanism thrown together at xerox parc. Is it not true that inheritance has lost favour over the years - composition is generally preferred, unless one is defining interfaces (and whether that should still be called "inheritance" is open to debate). > And you challenged me: > > >> Interesting story. Yes, when you have a problem of inference, it's > >> good to use an inference engine. > > >>> So some questions: > > >>> 1) So why not treat all 'inheritance' in this way? > > >> Because all inheritance is not about inference. > > > Hmmm. Then might you give an example of a situation where inheritance > > cannot be described in terms of inference? > > To which I described what inheritance is: > > > Inheritance is simply the redeclaration of functions and variables in a > > subscope. That's not inferrence. Yes, but I was asking for an example of when the /result/ of inheritance could not be described by inference. You answered a different question (albeit with the best of intentions I am sure). I discuss this further on. > > To which you said "subscope" was a nonsense word, and we had to define > our terms. (sigh). I believed the word was nonsense in terms of relevancy to what I had asked, not in terms of it having no definition (so 'sigh' right back at you sunshine). 
Let's put it down as standard usenet fare and move on. > > Then you said: > > > Well that all jolly-well looks like inference to me. > > And I said ... well you can read it above. > > > I guess we are done, your objection having been overcome. > > JOG, you're original question was: Why not treat all inheritance as the > kind of inferrence you get from a database join. Yes, I think you have missed my gist though. The question is why use a "Redeclaration mechanism" (if you want to call it that) to handle data transformation at all. An answer of "that's just how OO does it" is not really what I was interested in exploring. > > Answer: Because inheritance is not inference. Inheritance is the > redeclaration of variables and functions in a subscope. > > > I think because of your OO focus you haven't > > recognized that (in this case) there is a danger of putting the cart > > before the horse. Logic first, mechanism second, not t'other way > > around. > > On the contrary, your initial question was: > > Why can't the mechanism be treated like the logic. That was not my question at all. While you are referring to inheritance as a mechanism, I am referring to what is /achieved/ via the mechanism. Saying "redeclaration in a subscope" is no answer to what I intended - it is like answering the question "why do we eat" with "to put food into our stomachs". Perhaps we are speaking across each other because you are focussing on the T and not the I of IT (this sadly proliferates in the field, so I'm not intending to be disparaging to you personally, but rather to help conversation). Information is the subject matter of our area, and that is where we should start, not trying to squash it into whatever hole our particular favourite technology is. If no one had recognized the importance of analysing the nature of information, we'd still all be using flipping CODASYL, right? So, I challenge you to step back from an individual mechanism and look at data and behaviour theoretically. I contend that if we can do that, we will find there is in fact some middle ground in between OO and the principles behind relational theory, where the conceptual layer (object boundaries and behaviours) can be decoupled from data (the logical layer), all in the same language. > > My answer was: because it's a mechanism. > > So it seems to me that you are the one with the problem discerning the > difference between logic and mechanisms. > > Now, I understand your original point. You were trying to use > inheritance as a way to make an inference, and then later found that a > database join was a better approach. Good! Inference engines are > usually better at making inferences than redeclaration mechanisms. But > the conclusion you apparently drew from this is that there was > something wrong with the mechanism (inheritance) and that it should be > more like a database join. That's rather like pounding a nail with a > screwdriver and then wishing it was more like a hammer. > > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com > The Agile Transition Experts | web: www.objectmentor.com > 800-338-6716 |
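JOG's Aristotle example quoted earlier in this sub-thread ({Name, Species, Bday} joined with {Species, Mortality} yielding {Name, Mortality}) is easy to spell out without any inheritance at all. A rough Python sketch follows; the naive natural-join helper and the sample rows are invented for illustration.

people = [
    {"name": "Aristotle", "species": "Man", "bday": "-384"},
    {"name": "Rex",       "species": "Dog", "bday": "2001"},
]
entities = [
    {"species": "Man", "mortality": "Mortal"},
    {"species": "Dog", "mortality": "Mortal"},
]

def natural_join(r, s):
    # naive natural join: combine rows that agree on their shared attributes
    out = []
    for t in r:
        for u in s:
            shared = set(t) & set(u)
            if all(t[a] == u[a] for a in shared):
                out.append({**t, **u})
    return out

derived = [{"name": row["name"], "mortality": row["mortality"]}
           for row in natural_join(people, entities)]
print(derived)  # [{'name': 'Aristotle', 'mortality': 'Mortal'}, {'name': 'Rex', 'mortality': 'Mortal'}]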
JOG wrote: >>On 2008-03-08 21:39:37 -0600, JOG <j...@cs.nott.ac.uk> said: > A red herring as far as I'm concerned this Robert - after all RM is > not an "inference engine" either. What I am questioning whether we > need the concept of inheritance /whatsoever/. It does not exist in > logic, it has no underlying theoretical justification, and is purely > an ad hoc mechanism thrown together at xerox parc. 1. Devised at the NCC in Norway, not Xerox PARC. 2. Devised because of the influence of academic work on data types (Hoares' "record" types) , and noticing things having related properties/behaviours in simulation systems. So not really ad-hoc (thought went into providing the scheme) . > Is it not true that > inheritance has lost favour over the years - composition is generally > preferred, unless one is defining interfaces (and whether that should > still be called "inheritance" is open to debate). As a property acquisition/composition scheme, certainly. As a type substitutability mechanism, (sadly) no (Java, C# etc) . Regards, Steven Perryman
On Mar 12, 9:34 am, S Perryman <q...@q.com> wrote: > JOG wrote: > >>On 2008-03-08 21:39:37 -0600, JOG <j...@cs.nott.ac.uk> said: > > A red herring as far as I'm concerned this Robert - after all RM is > > not an "inference engine" either. What I am questioning whether we > > need the concept of inheritance /whatsoever/. It does not exist in > > logic, it has no underlying theoretical justification, and is purely > > an ad hoc mechanism thrown together at xerox parc. > > 1. Devised at the NCC in Norway, not Xerox PARC. Much obliged. > > 2. Devised because of the influence of academic work on data types (Hoares' > "record" types) , and noticing things having related properties/behaviours > in simulation systems. > > So not really ad-hoc (thought went into providing the scheme) . Remember that there is a huge amount of 'academic work' on XML - that doesn't make XML any the less ad-hoc, or without sound theoretical foundation. But again thanks for the info, and if you have any links to the things you cite, I'd be interested in having a look. > > > Is it not true that > > inheritance has lost favour over the years - composition is generally > > preferred, unless one is defining interfaces (and whether that should > > still be called "inheritance" is open to debate). > > As a property acquisition/composition scheme, certainly. > As a type substitutability mechanism, (sadly) no (Java, C# etc) . > > Regards, > Steven Perryman
"Robert Martin" <unclebob@objectmentor.com> wrote in message news:200803112125087826-unclebob@objectmentorcom... > On 2008-03-10 23:28:40 -0500, Marshall <marshall.spight@gmail.com> said: > > > Clearly comp.object and comp.databases.theory form just > > such a dyad, with data-centered vs. code-centered thinking > > the unresolvable heart of the conflict. > > So, in this entire discussion, have you seen me attack database theory > even once? No, but I have seen you be dismissive of ideas that someone capable of a data centered viewpoint would probably not have dismissed.
JOG wrote: > On Mar 12, 9:34 am, S Perryman <q...@q.com> wrote: >>JOG wrote: >>>A red herring as far as I'm concerned this Robert - after all RM is >>>not an "inference engine" either. What I am questioning whether we >>>need the concept of inheritance /whatsoever/. It does not exist in >>>logic, it has no underlying theoretical justification, and is purely >>>an ad hoc mechanism thrown together at xerox parc. >>2. Devised because of the influence of academic work on data types (Hoares' >>"record" types) , and noticing things having related properties/behaviours >>in simulation systems. >>So not really ad-hoc (thought went into providing the scheme) . > Remember that there is a huge amount of 'academic work' on XML - that > doesn't make XML any the less ad-hoc, or without sound theoretical > foundation. Perhaps so. But for the things we are discussing, these people and their work were the *genesis* . > But again thanks for the info, and if you have any links > to the things you cite, I'd be interested in having a look. A search on simula + hoare + record seems to yield a fair amount of sources. Regards, Steven Perryman
David Cressey wrote: > "Robert Martin" <unclebob@objectmentor.com> wrote in message > news:200803112125087826-unclebob@objectmentorcom... > >>On 2008-03-10 23:28:40 -0500, Marshall <marshall.spight@gmail.com> said: >> >> >>>Clearly comp.object and comp.databases.theory form just >>>such a dyad, with data-centered vs. code-centered thinking >>>the unresolvable heart of the conflict. >> >>So, in this entire discussion, have you seen me attack database theory >>even once? > > No, but I have seen you be dismissive of ideas that someone capable of a > data centered viewpoint would probably not have dismissed. And we have seen him prove his ignorance beyond any doubt while promoting himself.
On Mar 11, 7:25 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-10 23:28:40 -0500, Marshall <marshall.spi...@gmail.com> said: > > > Clearly comp.object and comp.databases.theory form just > > such a dyad, with data-centered vs. code-centered thinking > > the unresolvable heart of the conflict. > > So, in this entire discussion, have you seen me attack database theory > even once? You "downplay" RDBMS. You compared queries to "assembler language" in one reply, for example. While not a direct "attack" it is the next nearest thing. > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com -T-
S Perryman wrote: > JOG wrote: > > >>On 2008-03-08 21:39:37 -0600, JOG <j...@cs.nott.ac.uk> said: > > > A red herring as far as I'm concerned this Robert - after all RM is > > not an "inference engine" either. What I am questioning whether we > > need the concept of inheritance /whatsoever/. It does not exist in > > logic, it has no underlying theoretical justification, and is purely > > an ad hoc mechanism thrown together at xerox parc. > > 1. Devised at the NCC in Norway, not Xerox PARC. > > 2. Devised because of the influence of academic work on data types (Hoares' > "record" types) , and noticing things having related properties/behaviours > in simulation systems. "Types" tend to rely on similar hierarchical taxonomies (or at least DAG taxonomies) that inheritance does, and *suffer similar problems*. It is difficult to reduce most non-trivial real-world things into such trees/dags because they generally don't fit such, especially over the longer run. Even numbers, the poster child of "types", tend to get ugly if try to create a tree taxonomy with them. Feature sets are a more flexible and natural way to represent and manage variations-on-a- theme. (Disclaimer: I have no objective metrics to measure "more natural" and "flexible" at the moment.) > > So not really ad-hoc (thought went into providing the scheme) . > > > > Is it not true that > > inheritance has lost favour over the years - composition is generally > > preferred, unless one is defining interfaces (and whether that should > > still be called "inheritance" is open to debate). > > As a property acquisition/composition scheme, certainly. > As a type substitutability mechanism, (sadly) no (Java, C# etc) . > > > Regards, > Steven Perryman -T-
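A very small sketch of what topmind seems to mean by "feature sets", with made-up data: each item simply carries the set of features it has, and queries select on features, rather than each item having to sit at exactly one node of a type tree.

items = {
    "int":      {"exact", "ordered", "integral"},
    "rational": {"exact", "ordered"},
    "float":    {"ordered", "inexact"},
    "complex":  {"inexact"},
}

# select by features instead of walking a taxonomy
exact_ordered = {name for name, feats in items.items()
                 if {"exact", "ordered"} <= feats}
print(exact_ordered)   # {'int', 'rational'}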
topmind wrote: > S Perryman wrote: >>JOG wrote: >>>A red herring as far as I'm concerned this Robert - after all RM is >>>not an "inference engine" either. What I am questioning whether we >>>need the concept of inheritance /whatsoever/. It does not exist in >>>logic, it has no underlying theoretical justification, and is purely >>>an ad hoc mechanism thrown together at xerox parc. >>1. Devised at the NCC in Norway, not Xerox PARC. >>2. Devised because of the influence of academic work on data types (Hoares' >>"record" types) , and noticing things having related properties/behaviours >>in simulation systems. > "Types" tend to rely on similar hierarchical taxonomies (or at least > DAG taxonomies) that inheritance does, and *suffer similar problems*. > It is difficult to reduce most non-trivial real-world things into such > trees/dags because they generally don't fit such, especially over the > longer run. Even numbers, the poster child of "types", tend to get > ugly if try to create a tree taxonomy with them. Feature sets are a > more flexible and natural way to represent and manage variations-on-a- > theme. (Disclaimer: I have no objective metrics to measure "more > natural" and "flexible" at the moment.) Your rantings : 1. pollute my pleasant experience of recent debate with people who actually know something about database fundamentals, and have contributions related to other areas 2. are off-topic rubbish 3. demonstrate a complete ignorance of anything relating to type theory in programming languages So on all counts: on your way, little boy ...
topmind wrote: > On Mar 11, 7:25 pm, Robert Martin <uncle...@objectmentor.com> wrote: > >>On 2008-03-10 23:28:40 -0500, Marshall <marshall.spi...@gmail.com> said: >> >>>Clearly comp.object and comp.databases.theory form just >>>such a dyad, with data-centered vs. code-centered thinking >>>the unresolvable heart of the conflict. >> >>So, in this entire discussion, have you seen me attack database theory >>even once? > > You "downplay" RDBMS. You compared queries to "assembler language" in > one reply, for example. While not a direct "attack" it is the next > nearest thing. It's also incontrovertible proof of his profound ignorance.
On Mar 12, 12:54 am, S Perryman <q...@q.com> wrote: > Marshall wrote: > > > > It's also > > part of why the relational model is so interesting. And it's > > why OOPLs don't hold my interest so much any more; > > their axioms are complicated and often weak. > > Which "axioms" in particular cause you pain ?? The ones they leave out. The ones that would let me draw stronger conclusions about my code. Have you read, say, the Java language spec? It's very long and detailed. It's also a prose spec. So it's ambiguous in places, and there aren't any proofs done against it. At the same time, the guarantees you can make with the Java type system are useful, but they don't go nearly far enough. A goal of mine is to be able to take the "typeful programming" idea to its logical conclusion. > >>I take a slightly different view in that I obviously have the need > >>for various collection types, and support for a Relational "calculus" > >>to use on those collections. I also need support for collections of > >>ADTs in particular. > > The collections question is quite interesting: Lists, maps, bags, > > sets, tables, trees. It is apparent after only a modest amount > > of study that maps, sets, and tables are thoroughly and beautifully > > handled with relations. > > Collections should be deliberately vague. That is the current consensus in much of the PLT world. Consider STL. All kinda collections; all kinda algorithms. Certainly better than nothing! But still a long way from as good as can be done. The "vague collections" idea is not a good one IMHO. Because you can get more or less everything you want with relations. (Surprise! I like relations! :-) > Insert and remove. Empty. Contains an item, how many occurrences of > an item etc. > > A set is a specific form of collection (contains 0 or 1 instance of > any given element, insertion of a contained element has no effect etc) . What I consider ideal would be to have an axiomatic algebra that can succinctly express the complete set of such operations, and furthermore exactly specify the algebraic properties of such operations. This enables not only a lot of automatic optimization, but it also enables a lot more automated reasoning. (Actually optimization is just a special case of automated reasoning.) > > It took me a lot longer but I've reached > > the conclusion that lists fall in to the same category. > > Sequences are conceptually a partial mapping of (Index,Element) tuples. > Certain mappings being undefined (hence partial) . Exactly. A partial mapping. Aka a relation. > > Trees no longer seem like a single category to me: there > > are what I call "statically structured" trees, for example > > Customers/ Invoices/InvoiceLineItems, and "dynamically > > structured" trees, such as a parse tree. > > A tree is a specific form of graph. > Graphs are conceptually two sets with an invariant relationship > between the sets. Trees merely define additional invariants on > those sets. Yes. I still think my static/dynamic structure distinction is an important one. > > Statically structured > > trees are are a particular strong point for SQL and also a > > strong point for the RM. Dynamically structured trees *can be* > > handled with the RM but I don't think it does as good a job > > as is done in, say, FP languages with union types and > > structural recursion. 
I conjecture that it may be possible > > to develop best practices and/or tiny extensions to the > > RM such that it does as good a job, but currently I'm > > leaning in the direction of thinking this will not be possible. > > I'm not completely thrilled with structural recursion, but > > I have yet to see anything better. > > Is your issue with performance for things like breadth/depth > traversal etc ?? Your question caused me to consider my feelings for structural recursion, and I didn't really come up with much to say against it. I guess the worst I can say is, I don't see how to get a handle on how to transform them. You know what 1NF is, the form in which the system requires that attributes of relations have types that are not themselves relations? This turns out to be sufficient for (virtually) everything. However I don't think it's the best design choice. The problem that results when you relax 1NF, though, is a psychological one: everyone wants to start nesting everything deeply. And that's a disaster. > >>If possible I would like the prog lang user to be able to construct > >>specific collections such as sets etc, and for the prog lang env to be > >>within reasonable performance of something that is designed for the > >>support of some one specific aspect (as Lisp is for lists etc) . > >>I think that FP is the paradigm that could possibly do this. > > An important aspect of the performance requirement is physical > > independence, something that is largely ignored in PL theory, > > sadly. > > The separation of specification (interface) from implementation is > something key to ADT theory. And for those prog langs (CLU, Modula-2 > etc) that support one impl of each ADT per program unit, you have > the independence. > > However, the coming of OO (unintentionally) threw a spanner in the > works. It became possible to have multiple *different* impls of one ADT > used *concurrently* in the same program unit. I don't see that there's anything wrong with that necessarily. This is a version of physical independence, and that's a good thing. The problem comes more with the fact that objects are stateful; their status as first class variables is what is problematic. ("State is hell." --Ken Arnold.) And stateful objects aren't even necessarily a problem per se. See for example Peter Van Roy's demonstration of objects as mechanisms for increasing modularity in ways that are unavailable to FP or LP languages. Or consider that closures are isomorphic to objects in languages with (mutable local variables + lexical scope + first class functions.) Some have even made the case that threads are isomorphic to objects. No, the problem comes up with the psychological factor again. Once you admit stateful objects, people use them freaking everywhere, and having state sprinkled like little grains of sand all through the gears of your program, THAT's a problem. (Although I believe that message passing (not the Smalltalk sense of the word) may be able to reproduce the modularity advantage Van Roy describes, without the disadvantages of local state.) > A rough analogy (that AFAIK is not supported in commercial systems) > would be an SQL database that for some given table operates a B-tree > for certain records in the table, hashing for others etc. And expects > the performance for ops on the entire table to be the same whatever > predicates are applied to the table. Well, "expectation" is an attribute of people, not of programs. People can be educated. 
Performance tuning necessarily involves human input, because performance requirements are not automatically inferable by software. We have operation A and operation B, and it may be that the requirements are that A has to be as fast as possible and if that's at the expense of B, then so be it. Or vice versa. Or the requirements may say to balance them. How can the system know which way to optimize for? It needs to be told the requirements. HOWEVER, once it knows them, obviously the most desirable case is that it does the tuning itself. So what we really want is a system that lets us specify our performance requirements declaratively. And at this point, Bob can pipe up and point out that the best candidate for a system that will be able to support the sort of features I'm describing is a relational system. (Oops, I guess he doesn't have to now.) > > If we look at the mathematical universe, we usually encounter > > an extensional viewpoint. And if something is expressed > > intensionally, or algorithmically, mathematicians are free to > > immediately think of its extension, because they do not have > > to limit themselves to what is computable. > > On the other hand, if we look at the computable universe, we > > see more often the intensional viewpoint. And we are not > > free to immediately shift into extensional mode, because > > of the possibly-infinite, almost-certainly-prohibitive cost of > > doing so. > > The RM gives the best handle on the computably extensional > > viewpoint I have encountered. FP gives the best handle > > on the intensional viewpoint. > > I would say that FP has a strong extensional viewpoint too, solely > because it too is based on mathematical concepts (functions) . Sure. I think it's just as fair to say that the RM has a strong intensional viewpoint, too: SELECT CustomerId, InvoiceId from Invoices WHERE TotalCost > 100.00 See the lambda in there? It's camouflaged, but it's still there. Hiding behind a rock; clever thing. Marshall
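A few of the points above can be made concrete with small sketches (Python, illustrative only; the data and helper names are assumptions, not from the posts). First, the "a sequence is a partial mapping, aka a relation" point: the same list can be viewed as a set of (index, element) tuples, and lookup becomes a relational restriction plus projection.

    # A sequence viewed as a relation: a set of (index, element) tuples.
    xs = ["a", "b", "c"]
    rel = {(i, x) for i, x in enumerate(xs)}   # {(0, 'a'), (1, 'b'), (2, 'c')}

    def lookup(relation, index):
        # Restriction (i == index) followed by projection onto the element.
        matches = {x for i, x in relation if i == index}
        return next(iter(matches), None)       # partial: None where undefined

    print(lookup(rel, 1))   # 'b'
    print(lookup(rel, 9))   # None -- the mapping is partial

    # A map is likewise a binary relation with the dependency key -> value.
    ages_rel = set({"alice": 40, "bob": 35}.items())

Second, the "union types and structural recursion" point about dynamically structured trees such as parse trees: the recursion follows the shape of the value, one case per constructor of the union.

    # A dynamically structured tree: a tiny arithmetic parse tree built from
    # a two-case union, encoded here as tagged tuples.
    #   ("num", n)            -- leaf
    #   ("add", left, right)  -- interior node
    def evaluate(node):
        tag = node[0]
        if tag == "num":
            return node[1]
        if tag == "add":
            return evaluate(node[1]) + evaluate(node[2])
        raise ValueError("unknown node: %r" % (tag,))

    tree = ("add", ("num", 1), ("add", ("num", 2), ("num", 3)))
    print(evaluate(tree))   # 6

And the "camouflaged lambda" in the closing SELECT can be written out explicitly; a rough rendering over an assumed in-memory list of invoice tuples:

    # Invoices as (customer_id, invoice_id, total_cost) tuples -- sample data.
    invoices = [(1, 101, 250.00), (1, 102, 40.00), (2, 201, 180.00)]

    # SELECT CustomerId, InvoiceId FROM Invoices WHERE TotalCost > 100.00
    # The WHERE clause is the lambda; the SELECT list is a projection.
    result = [(cust, inv) for (cust, inv, total) in invoices if total > 100.00]
    print(result)   # [(1, 101), (2, 201)]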
S Perryman wrote: > topmind wrote: > > > S Perryman wrote: > > >>JOG wrote: > > >>>A red herring as far as I'm concerned this Robert - after all RM is > >>>not an "inference engine" either. What I am questioning whether we > >>>need the concept of inheritance /whatsoever/. It does not exist in > >>>logic, it has no underlying theoretical justification, and is purely > >>>an ad hoc mechanism thrown together at xerox parc. > > >>1. Devised at the NCC in Norway, not Xerox PARC. > > >>2. Devised because of the influence of academic work on data types (Hoares' > >>"record" types) , and noticing things having related properties/behaviours > >>in simulation systems. > > > "Types" tend to rely on similar hierarchical taxonomies (or at least > > DAG taxonomies) that inheritance does, and *suffer similar problems*. > > It is difficult to reduce most non-trivial real-world things into such > > trees/dags because they generally don't fit such, especially over the > > longer run. Even numbers, the poster child of "types", tend to get > > ugly if try to create a tree taxonomy with them. Feature sets are a > > more flexible and natural way to represent and manage variations-on-a- > > theme. (Disclaimer: I have no objective metrics to measure "more > > natural" and "flexible" at the moment.) > > Your rantings : > > 1. pollute my pleasant experience of recent debate with people who actually > know something about database fundamentals, and have contributions > related to other areas Did I say anything objectively wrong? > > 2. are off-topic rubbish I disagree it is "off-topic". > > 3. demonstrate a complete ignorance of anything relating to type theory in > programming languages Did I say anything objectively wrong? Or is this Soviet Justice? > > > So on all counts: on your way, little boy ... Net-etiquette dictates you simply ignore replies you don't like rather than call people names. Why not spend time coding up realistic examples of types curing cancer and saving puppies instead of spending time insulting people? Show gizmos being good instead of claiming people are bad. -T-
topmind wrote: > S Perryman wrote: TM>"Types" tend to rely on similar hierarchical taxonomies (or at least TM>DAG taxonomies) that inheritance does, and *suffer similar problems*. TM>It is difficult to reduce most non-trivial real-world things into such TM>trees/dags because they generally don't fit such, especially over the TM>longer run. Even numbers, the poster child of "types", tend to get TM>ugly if try to create a tree taxonomy with them. Feature sets are a TM>more flexible and natural way to represent and manage variations-on-a- TM>theme. (Disclaimer: I have no objective metrics to measure "more TM>natural" and "flexible" at the moment.) >>Your rantings : >>1. pollute my pleasant experience of recent debate with people who actually >> know something about database fundamentals, and have contributions >> related to other areas > Did I say anything objectively wrong? Yes. Types do *not* "rely on similar hierarchical taxonomies (or at least DAG taxonomies)" . #1 If this is the case, then "objectively" show us why this is so. >>2. are off-topic rubbish > I disagree it is "off-topic". Just to educate you before I send you on your way, you non english- understanding muppet : JOG made a statement about *who* and *what* made inheritance come to be in OO. I corrected him on both matters. Please feel free to show us how your silly rant contributes >>3. demonstrate a complete ignorance of anything relating to type theory in >> programming languages > Did I say anything objectively wrong? We await your reply to #1 with interest. Steven Perryman
S Perryman wrote: > topmind wrote: > > > S Perryman wrote: > > TM>"Types" tend to rely on similar hierarchical taxonomies (or at least > TM>DAG taxonomies) that inheritance does, and *suffer similar problems*. > TM>It is difficult to reduce most non-trivial real-world things into such > TM>trees/dags because they generally don't fit such, especially over the > TM>longer run. Even numbers, the poster child of "types", tend to get > TM>ugly if try to create a tree taxonomy with them. Feature sets are a > TM>more flexible and natural way to represent and manage variations-on-a- > TM>theme. (Disclaimer: I have no objective metrics to measure "more > TM>natural" and "flexible" at the moment.) > > >>Your rantings : > > >>1. pollute my pleasant experience of recent debate with people who actually > >> know something about database fundamentals, and have contributions > >> related to other areas > > > Did I say anything objectively wrong? > > Yes. > > Types do *not* "rely on similar hierarchical taxonomies (or at least > DAG taxonomies)" . I never said they "must", only "tend to". You misrepresented me. Gee, that's new. > > >>2. are off-topic rubbish > > > I disagree it is "off-topic". > > Just to educate you before I send you on your way, you non english- > understanding muppet : "Muppet"? You're weird. > > JOG made a statement about *who* and *what* made inheritance come to be > in OO. I corrected him on both matters. > > Please feel free to show us how your silly rant contributes This is where "types" were mentioned: > 2. Devised because of the influence of academic work on data types (Hoares' > "record" types) , and noticing things having related properties/behaviours > in simulation systems. And in the message just before that, JOG stated: "What I am questioning whether we need the concept of inheritance /whatsoever/." > >>3. demonstrate a complete ignorance of anything relating to type theory in > >> programming languages > > > Did I say anything objectively wrong? > > We await your reply to #1 with interest. > > > Steven Perryman -T-
topmind wrote: > S Perryman wrote: TM>Did I say anything objectively wrong? >>Yes. >>Types do *not* "rely on similar hierarchical taxonomies (or at least >>DAG taxonomies)" . > I never said they "must", only "tend to". Who said "must" ?? Not me, pal. What is the word for that ... ?? Mis-representation, perhaps. So getting back to it : Feel free to "objectively" show us why types *tend* to "rely on similar hierarchical taxonomies (or at least DAG taxonomies)" . > You misrepresented me. Gee, that's new. We can always rely on you to kill yourself with your own sword as quickly as the next line of text. You never disappoint (LOL) . >>JOG made a statement about *who* and *what* made inheritance come to be >>in OO. I corrected him on both matters. >>Please feel free to show us how your silly rant contributes > This is where "types" were mentioned: >>2. Devised because of the influence of academic work on data types (Hoares' >>"record" types) , and noticing things having related properties/behaviours >>in simulation systems. > And in the message just before that, JOG stated: > "What I am questioning whether we > need the concept of inheritance /whatsoever/." LOL !!! Classic topmind muppetry. Posting rants to the wrong person altogether. Rather than admitting the embarrassing truth, then tries to selectively edit the entire posting to prevaricate. So what do we have : - you are claiming you are writing about something that was *not even present* in my posting - a rant that *does not even relate* to that text anyway Funny isn't it, that JOG in his reply had no problems understanding what I was telling him (or replying accordingly) . So, to send you on your way, you non english-understanding muppet ... Here is the ref to my posting (in its *entirety* ) to JOG : http://groups.google.com/group/comp.databases.theory/msg/a25c4a9a21982df6 Please feel free to show us how your silly rant contributes to anything in that posting. Just to help you, there are two points of note in the posting, denoted "1" and "2" . 1 tells JOG *who* invented inheritance in OOP. 2 tells JOG *why* (according to the inventors) inheritance came to be. >>>Did I say anything objectively wrong? >>We await your reply to #1 with interest. Ah, the other classic topmind muppetry : demand something "objective" of someone, but scuttles away quietly into a dark corner when the same is demanded of him. Steven Perryman
On 2008-03-12 04:21:53 -0500, JOG <jog@cs.nott.ac.uk> said: > What I am questioning whether we > need the concept of inheritance /whatsoever/. Is inheritance necessary? Certainly not. Is it useful? Certainly. > It does not exist in > logic, it has no underlying theoretical justification, and is purely > an ad hoc mechanism thrown together at xerox parc. I think it was "thrown together" in Norway by Dahl and Nygaard. I could be wrong. But so what? It's a useful tool. > Is it not true that > inheritance has lost favour over the years - Overuse, yes. Not use. > composition is generally preferred, Generally, yes. > unless one is defining interfaces (and whether that should > still be called "inheritance" is open to debate). Inheritance is the redeclaration of functions and variables within a subscope. That fits rather well with the abstract methods of an interface. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
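A minimal sketch of that preference (Python, with hypothetical Shape/Logger classes, illustrative only): inheritance kept for declaring and implementing an interface, and reuse obtained by holding a part rather than extending a base class.

    from abc import ABC, abstractmethod

    # Inheritance used only to state an interface: the abstract method is
    # redeclared (implemented) in the subscope of the subclass.
    class Shape(ABC):
        @abstractmethod
        def area(self):
            ...

    class Circle(Shape):
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r * self.r

    # Composition used for reuse: the logger is a part, not a base class.
    class Logger:
        def log(self, msg):
            print("[log] " + msg)

    class ShapePrinter:
        def __init__(self, logger):
            self.logger = logger          # has-a, not is-a
        def show(self, shape):
            self.logger.log("area = " + str(shape.area()))

    ShapePrinter(Logger()).show(Circle(1.0))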
On 2008-03-12 05:35:35 -0500, "David Cressey" <cressey73@verizon.net> said: > > "Robert Martin" <unclebob@objectmentor.com> wrote in message > news:200803112125087826-unclebob@objectmentorcom... >> On 2008-03-10 23:28:40 -0500, Marshall <marshall.spight@gmail.com> said: >> >>> Clearly comp.object and comp.databases.theory form just >>> such a dyad, with data-centered vs. code-centered thinking >>> the unresolvable heart of the conflict. >> >> So, in this entire discussion, have you seen me attack database theory >> even once? > > No, but I have seen you be dismissive of ideas that someone capable of a > data centered viewpoint would probably not have dismissed. That's a fair point. I can't point to any incident, but I think it's likely. However, I can also say that by participating in these discussions I have learned to be a lot less dismissive, and have gained much more respect for the views expressed on c.d.t (though not for some of the bad odor'd participants). -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
On 2008-03-12 10:11:54 -0500, topmind <topmind@technologist.com> said: > You "downplay" RDBMS. Over the years I have learned to do that a lot less. There was a time that I considered DBs to be "a bucket of bits". Thanks, in part, to these crossposted discussions between c.o and c.d.t I have come to have a very different opinion. > You compared queries to "assembler language" in > one reply, for example. While not a direct "attack" it is the next > nearest thing. I can understand why someone might take offense at that, if one thought that there was something awful about assembly language. The real point of that remark was that the user of a tool is at a higher level of abstraction than the tool itself. SQL is a tool. ORMs are tools that use SQL to get their job done, just like compilers use assembly to get their job done. In that sense ORMs live at a higher level of abstraction than SQL. The members of c.d.t. might respond negatively to that idea because they see SQL as a better vehicle to do the job that the ORM is trying to do. That's fine, but does not change the fact that the ORM is using SQL as an implementation language. -- Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com Object Mentor Inc. | blog: www.butunclebob.com The Agile Transition Experts | web: www.objectmentor.com 800-338-6716 |
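A toy rendering of the "ORMs use SQL the way compilers use assembly" claim (Python with the standard sqlite3 module; the TinyMapper class and the invoices table are made up for illustration, not any real ORM): callers work with objects, and the mapper emits SQL underneath.

    import sqlite3

    class Invoice:
        # Plain object the application code works with.
        def __init__(self, customer_id, invoice_id, total_cost):
            self.customer_id = customer_id
            self.invoice_id = invoice_id
            self.total_cost = total_cost

    class TinyMapper:
        # Hypothetical mini-ORM: callers see objects, it emits SQL underneath.
        def __init__(self, conn):
            self.conn = conn
        def invoices_over(self, amount):
            sql = ("SELECT customer_id, invoice_id, total_cost "
                   "FROM invoices WHERE total_cost > ?")   # SQL as the "target language"
            return [Invoice(*row) for row in self.conn.execute(sql, (amount,))]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (customer_id INT, invoice_id INT, total_cost REAL)")
    conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                     [(1, 101, 250.0), (1, 102, 40.0)])
    for inv in TinyMapper(conn).invoices_over(100.0):
        print(inv.invoice_id, inv.total_cost)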
Robert Martin wrote: > On 2008-03-12 10:11:54 -0500, topmind <topmind@technologist.com> said: > > > You "downplay" RDBMS. > > Over the years I have learned to do that a lot less. There was a time > that I considered DBs to be "a bucket of bits". Thanks, in part, to > these crossposted discussions between c.o and c.d.t I have come to have > a very different opinion. > > > You compared queries to "assembler language" in > > one reply, for example. While not a direct "attack" it is the next > > nearest thing. > > I can understand why someone might take offense at that, if one thought > that there was something awful about assembly language. It is very rarely one's first choice if machine speed is not the main requirements factor. > > The real point of that remark was that the user of a tool is at a > higher level of abstraction than the tool itself. SQL is a tool. ORMs > are tools that use SQL to get their job done, just like compilers use > assembly to get their job done. In that sense ORMs live at a higher > level of abstraction than SQL. Often it seems that you and other OO proponents are more interested in seeing objects and only objects rather than using the best abstraction. You automatically equate "using objects" with "higher abstraction". (And all the repetitious set/get's and new new's make it almost seem like OO assembler.) Even if there was a slight difference in abstraction level, it is often not great enough to bother translating back and forth. Your "PARADIGM TRANSLATION TAX" is rather high, uncomfortably so. (And ORMs are far from seamless or simple, raising more questions than they solve.) > > The members of c.d.t. might respond negatively to that idea because > they see SQL as a better vehicle to do the job that the ORM is trying > to do. That's fine, but does not change the fact that the ORM is using > SQL as an implementation language. I could use Java to implement machine language and machine language to implement Java. Does that tell us anything useful? > > > -- > Robert C. Martin (Uncle Bob) | email: unclebob@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com -T-
S Perryman wrote: > topmind wrote: > > > S Perryman wrote: > > TM>Did I say anything objectively wrong? > > >>Yes. > > >>Types do *not* "rely on similar hierarchical taxonomies (or at least > >>DAG taxonomies)" . > > > I never said they "must", only "tend to". > > Who said "must" ?? Not me, pal. > What is the word for that ... ?? Mis-representation, perhaps. Let me rephrase the question. What SPECIFICLY did I say about "types" that is objectively wrong? > > So getting back to it : > > Feel free to "objectively" show us why types *tend* to "rely on similar > hierarchical taxonomies (or at least DAG taxonomies)" . I don't know "why", for I didn't design the common languages that tend to have dag/tree-based types. You'll have to ask Gosling etc. that. > >>JOG made a statement about *who* and *what* made inheritance come to be > >>in OO. I corrected him on both matters. > > >>Please feel free to show us how your silly rant contributes > > > This is where "types" were mentioned: > > >>2. Devised because of the influence of academic work on data types (Hoares' > >>"record" types) , and noticing things having related properties/behaviours > >>in simulation systems. > > > And in the message just before that, JOG stated: > > > "What I am questioning whether we > > need the concept of inheritance /whatsoever/." > > LOL !!! Classic topmind muppetry. > > Posting rants to the wrong person altogether. Even if true, that does not make it "off topic". It's only one reply level away. > Rather than admitting the embarrassing truth, then tries to selectively > edit the entire posting to prevaricate. > > So what do we have : > > - you are claiming you are writing about something that was *not even > present* in my posting I was trying to guess what you implied. You create vaguery and then blame me when I try to clean it up by paraphrasing you with more precision. Typical. > > - a rant that *does not even relate* to that text anyway > > > Funny isn't it, that JOG in his reply had no problems understanding > what I was telling him (or replying accordingly) . And this relates to what? [snapped longer he-said-she-said bickery] > Steven Perryman -T-
topmind wrote: > S Perryman wrote: > Let me rephrase the question. What SPECIFICLY did I say about "types" > that is objectively wrong? For the umpteenth time : <quote> types *tend* to "rely on similar hierarchical taxonomies (or at least DAG taxonomies) </quote> And for the umpteenth time, feel free to "objectively" show us why your quoted text is true (as opposed to being the rantings of an idiot) . >>So getting back to it : >>Feel free to "objectively" show us why types *tend* to "rely on similar >>hierarchical taxonomies (or at least DAG taxonomies)" . > I don't know "why", for I didn't design the common languages that tend > to have dag/tree-based types. You'll have to ask Gosling etc. that. Hang on. You have made a specific claim in your rant about "types" . You cannot support how or why that claim is anything other than rubbish. And then you have the gall to try to blame OO prog language designers as to why you cannot support your claim. Consider yourself promoted to uber-muppet. >>LOL !!! Classic topmind muppetry. >>Posting rants to the wrong person altogether. > Even if true, that does not make it "off topic". It's only one reply > level away. 1. Wrong. Completely unrelated to anything I specifically posted in reply to JOG. 2. Is it true (rhetorical question) ?? >>Rather than admitting the embarrassing truth, then tries to selectively >>edit the entire posting to prevaricate. >>So what do we have : >>- you are claiming you are writing about something that was *not even >> present* in my posting > I was trying to guess what you implied. You create vaguery and then > blame me when I try to clean it up by paraphrasing you with more > precision. Typical. ROFTLMAO. How can informing someone as to who invented inheritance in OOP, and the reasons why, be "vaguery" ?? Please feel free to tell us. >>Funny isn't it, that JOG in his reply had no problems understanding >>what I was telling him (or replying accordingly) . > And this relates to what? The fact that he can read and understand the English language, and actually track and reply to the correct posting in a Usenet thread. Which you evidently cannot. So, once again : On your way, you non english-understanding muppet ...
S Perryman wrote: > topmind wrote: > > > S Perryman wrote: > > > Let me rephrase the question. What SPECIFICLY did I say about "types" > > that is objectively wrong? > > For the umpteenth time : > > <quote> > > types *tend* to "rely on similar > hierarchical taxonomies (or at least DAG taxonomies) How many popular languages can you name that DON'T rely on trees or DAGS for type matching and equivalency detection? And that claim is not about types, but rather *usage* of types. > >>Posting rants to the wrong person altogether. > > > Even if true, that does not make it "off topic". It's only one reply > > level away. > > 1. Wrong. > > Completely unrelated to anything I specifically posted in reply to JOG. Either way "off topic" is an exaggeration. Don't be a Drama Queen. [el snippo] > > > 2. Is it true (rhetorical question) ?? > > > >>Rather than admitting the embarrassing truth, then tries to selectively > >>edit the entire posting to prevaricate. > > >>So what do we have : > > >>- you are claiming you are writing about something that was *not even > >> present* in my posting > > > I was trying to guess what you implied. You create vaguery and then > > blame me when I try to clean it up by paraphrasing you with more > > precision. Typical. > > ROFTLMAO. > > How can informing someone as to who invented inheritance in OOP, and the > reasons why, be "vaguery" ?? > > Please feel free to tell us. I was addressing the issue raised by non-me of the utility of inheritance, not its invention. I don't care if Kermit the Frog invented it. -T-
On 13 Mar, 18:40, Robert Martin <uncle...@objectmentor.com> wrote: > The real point of that remark was that the user of a tool is at a > higher level of abstraction than the tool itself. SQL is a tool. ORMs > are tools that use SQL to get their job done, just like compilers use > assembly to get their job done. In that sense ORMs live at a higher > level of abstraction than SQL. Lets have an example: There are many "compiler" products translating from a high-level language like ADA to a low-level language like C, instead of translating to machine code directly. What if someone wrote a "compiler" translating C source code to ADA source code, would that make C more high level than ADA? Hardly? The existance of a product translating from language A to language B doesn't say anything about the levels of A and B. > The members of c.d.t. might respond negatively to that idea because > they see SQL as a better vehicle to do the job that the ORM is trying > to do. That's fine, but does not change the fact that the ORM is using > SQL as an implementation language. If a RDBMS product is implemented using an OOPL, does that make relational algebra more high level than OO? //frebe
On Mar 14, 2:17 pm, frebe <freb...@gmail.com> wrote: > On 13 Mar, 18:40, Robert Martin <uncle...@objectmentor.com> wrote: > > > The real point of that remark was that the user of a tool is at a > > higher level of abstraction than the tool itself. SQL is a tool. ORMs > > are tools that use SQL to get their job done, just like compilers use > > assembly to get their job done. In that sense ORMs live at a higher > > level of abstraction than SQL. > > Lets have an example: There are many "compiler" products translating > from a high-level language like ADA to a low-level language like C, > instead of translating to machine code directly. What if someone wrote > a "compiler" translating C source code to ADA source code, would that > make C more high level than ADA? Hardly? The existance of a product > translating from language A to language B doesn't say anything about > the levels of A and B. In any case, Marshall nicely described the difficulty of writing a reasonable ORM in the first place. This has more to do with the impedance mismatch than with some notion of "higher level". The RM is suited (and only suited) to representing and querying information in the form of large sets of propositions, whereas OO is suited (and only suited) to building state machines within an abstract computational machine. I would hardly call a set of propositions a state machine, and I would hardly call a state machine a set of propositions. Therefore there isn't really any overlap in the purpose of OO and RM. Not surprisingly the best approach to build a system is often a hybrid of the two. The idea to directly map a tuple (or "record") to an object reveals confusion. A tuple is a non-scalar value representing an immutable fact. An object is an identifiable state machine within a larger abstract computational machine. The RM has a sound mathematical basis, and it's clear that the RM will stand the test of time. Any OO programmer who thinks it's reasonable to wrap RM behind an ORM doesn't understand the RM, its fundamental importance, the extent to which data can be decoupled from application and the tremendous value in doing so.
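The tuple-versus-object distinction above reads compactly as code (a Python sketch; the Enrolment and Account names are made up for illustration): a tuple is an immutable value with no identity of its own, while an object is an identifiable bundle of mutable state with transitions.

    from collections import namedtuple

    # A tuple: an immutable value standing for a proposition (a fact).
    Enrolment = namedtuple("Enrolment", "student course")
    fact = Enrolment("alice", "databases")     # "alice is enrolled in databases"
    # fact.course = "logic"  -> AttributeError: values don't change, facts get replaced

    # An object: an identifiable little state machine.
    class Account:
        def __init__(self, balance):
            self.balance = balance             # mutable state
        def deposit(self, amount):
            self.balance += amount             # a state transition

    a = Account(100)
    a.deposit(50)
    print(fact, a.balance)                     # same object identity, new state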
On Thu, 13 Mar 2008 22:17:40 -0700 (PDT), frebe wrote: > On 13 Mar, 18:40, Robert Martin <uncle...@objectmentor.com> wrote: >> The real point of that remark was that the user of a tool is at a >> higher level of abstraction than the tool itself. SQL is a tool. ORMs >> are tools that use SQL to get their job done, just like compilers use >> assembly to get their job done. In that sense ORMs live at a higher >> level of abstraction than SQL. > > Lets have an example: There are many "compiler" products translating > from a high-level language like ADA to a low-level language like C, > instead of translating to machine code directly. What if someone wrote > a "compiler" translating C source code to ADA source code, would that > make C more high level than ADA? Hardly? The existance of a product > translating from language A to language B doesn't say anything about > the levels of A and B. Right. What does it, is the difficulty of designing such a compiler. Clearly within the set of Turing-complete languages you could translate from whatever language to any other. But, while translation from Ada to C is considerably difficult (mainly because C is ill-defined), a good translation from C to Ada is almost impossible. "Good" means deducing higher-level constructs from the code. It is much like literary translation, and requires general intelligence. Ah, ADA = The Americans with Disabilities Act of 1990, the programming language is called "Ada", named after Augusta Ada King, Countess of Lovelace http://en.wikipedia.org/wiki/Ada_Lovelace >> The members of c.d.t. might respond negatively to that idea because >> they see SQL as a better vehicle to do the job that the ORM is trying >> to do. That's fine, but does not change the fact that the ORM is using >> SQL as an implementation language. > > If a RDBMS product is implemented using an OOPL, does that make > relational algebra more high level than OO? That depends on the application domain, the comparison is conditional to that. Further, some things might be incomparable. I agree that the fact of use alone does not imply the outcome of a comparison. There are all sorts of use, some are quite meaningless. I remember in student times, we implemented gotos (which were prohibited) using structural constructs, just to protest and to have fun. Abstraction inversion is a quite common thing. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
Quoth S Perryman: > Consider yourself promoted to uber-muppet. When did "muppet" become an insult? -- Jon
topmind wrote: > S Perryman wrote: SP>Let me rephrase the question. What SPECIFICLY did I say about "types" SP>that is objectively wrong? >> >>For the umpteenth time : >> >><quote> >>types *tend* to "rely on similar >>hierarchical taxonomies (or at least DAG taxonomies) > How many popular languages can you name that DON'T rely on trees or > DAGS for type matching and equivalency detection? > And that claim is not about types, but rather *usage* of types. Here are some very popular prog langs for the following arenas : commercial, general-purpose, Internet, safety-critical The prog langs : COBOL, C, Javascript, Ada(83) Which of them rely on "trees or DAGS for type matching and equivalency detection" ?? None. QED. Additionally, you exposed your (previously indicated) ignorance about the fundamentals of type theory. For type matching is always done on the basis of type *name* and/or *structure* . Neither of which require "trees or DAGs" . TM>I was trying to guess what you implied. You create vaguery and then TM>blame me when I try to clean it up by paraphrasing you with more TM>precision. Typical. >>How can informing someone as to who invented inheritance in OOP, and the >>reasons why, be "vaguery" ?? >>Please feel free to tell us. > I was addressing the issue raised by non-me of the utility of > inheritance, not its invention. 1. Yes, in response to a *completely different* discussion. 2. You have not been able to show us anything in my posting that is "vaguery" (surprise surprise) . Once again, you have suffered a 'typing Tourettes' attack, but couldn't even direct it to a vaguely relevant posting in the debate. It is has been most amusing watching you squirm like an impaled worm on yet another episode of your muppetry. And as always, Usenet archives record them for posterity Regards, Steven Perryman
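For what it is worth, the name-versus-structure point can be shown in a few lines (a Python sketch with hypothetical Shape/Square classes, illustrative only; Python's typing.Protocol gives structural matching, ordinary subclassing gives nominal matching):

    from typing import Protocol, runtime_checkable

    # Matching by *structure*: anything with the right shape conforms,
    # no declared relationship needed.
    @runtime_checkable
    class HasArea(Protocol):
        def area(self) -> float: ...

    class Square:
        def __init__(self, side):
            self.side = side
        def area(self):
            return self.side * self.side

    print(isinstance(Square(2), HasArea))   # True -- structural match

    # Matching by *name*: only declared subclasses conform.
    class Shape: pass
    class Circle(Shape): pass
    print(isinstance(Circle(), Shape))      # True  -- nominal match
    print(isinstance(Square(2), Shape))     # False -- same shape, wrong name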
"Jon Heggland" <jon.heggland@ntnu.no> wrote in message news:frdi08$mvo$1@kuling.itea.ntnu.no... > Quoth S Perryman: > > Consider yourself promoted to uber-muppet. > > When did "muppet" become an insult? Hey, it isn't easy being right!
David Cressey wrote: > "Jon Heggland" <jon.heggland@ntnu.no> wrote in message > news:frdi08$mvo$1@kuling.itea.ntnu.no... >>Quoth S Perryman: >>>Consider yourself promoted to uber-muppet. >>When did "muppet" become an insult? http://en.wikipedia.org/wiki/Muppet_%28slang%29 > Hey, it isn't easy being right! Not sure what your comment has to do with the term "muppet" . Regards, Steven Perryman
On 14 Mar, 10:43, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > On Thu, 13 Mar 2008 22:17:40 -0700 (PDT), frebe wrote: > > On 13 Mar, 18:40, Robert Martin <uncle...@objectmentor.com> wrote: > >> The real point of that remark was that the user of a tool is at a > >> higher level of abstraction than the tool itself. SQL is a tool. ORMs > >> are tools that use SQL to get their job done, just like compilers use > >> assembly to get their job done. In that sense ORMs live at a higher > >> level of abstraction than SQL. > > > Lets have an example: There are many "compiler" products translating > > from a high-level language like ADA to a low-level language like C, > > instead of translating to machine code directly. What if someone wrote > > a "compiler" translating C source code to ADA source code, would that > > make C more high level than ADA? Hardly? The existance of a product > > translating from language A to language B doesn't say anything about > > the levels of A and B. > > Right. What does it, is the difficulty of designing such a compiler. > Clearly within the set of Turing-complete languages you could translate > from whatever language to any other. But, while translation from Ada to C > is considerably difficult (mainly because C is ill-defined), a good > translation from C to Ada is almost impossible. That's why the OO camp has such problems with making a good ORM. If SQL would have been low-level, compared to the network model, the task would have been much easier. //frebe
On Mar 5, 11:23 pm, Robert Martin <uncle...@objectmentor.com> wrote: > On 2008-03-05 10:16:30 -0600, topmind <topm...@technologist.com> said: > > > On Mar 4, 10:52 pm, Robert Martin <uncle...@objectmentor.com> wrote: > >> On 2008-03-03 17:25:48 -0600, topmind <topm...@technologist.com> said: > > >>> But I think anybody inspecting both examples will clearly see that my > >>> version is a lot less total code. > > >> C++ is a pretty wordy language. If I wrote it in Ruby I bet I'd beat > >> you by a wide margin. > > > I am a skeptical, but you are welcome to try. And, it would probably > > be the meta features of Ruby that cut it down, not OOP. > > The "meta" features of Ruby *are* OO. You have a tendency to label everything and everyone as "OO". Plus, doing meta in an OO way is not necessarily more compact than doing meta in a non-OO way. > > -- > Robert C. Martin (Uncle Bob) | email: uncle...@objectmentor.com > Object Mentor Inc. | blog: www.butunclebob.com > The Agile Transition Experts | web: www.objectmentor.com > 800-338-6716 | -T-
S Perryman wrote: > topmind wrote: > > > S Perryman wrote: > > SP>Let me rephrase the question. What SPECIFICLY did I say about "types" > SP>that is objectively wrong? > >> > >>For the umpteenth time : > >> > >><quote> > > >>types *tend* to "rely on similar > >>hierarchical taxonomies (or at least DAG taxonomies) > > > How many popular languages can you name that DON'T rely on trees or > > DAGS for type matching and equivalency detection? > > > And that claim is not about types, but rather *usage* of types. > > Here are some very popular prog langs for the following arenas : > > commercial, general-purpose, Internet, safety-critical > > The prog langs : COBOL, C, Javascript, Ada(83) > > Which of them rely on "trees or DAGS for type matching and equivalency > detection" ?? > > None. QED. Can you demonstrate a type cycle (circular reference) allowed in C? > > Additionally, you exposed your (previously indicated) ignorance > about the fundamentals of type theory. For type matching is always done > on the basis of type *name* and/or *structure* . > > Neither of which require "trees or DAGs" . I never claimed "required". You are putting words in my mouth. > > > TM>I was trying to guess what you implied. You create vaguery and then > TM>blame me when I try to clean it up by paraphrasing you with more > TM>precision. Typical. > > >>How can informing someone as to who invented inheritance in OOP, and the > >>reasons why, be "vaguery" ?? > > >>Please feel free to tell us. > > > I was addressing the issue raised by non-me of the utility of > > inheritance, not its invention. > > 1. Yes, in response to a *completely different* discussion. That still would not make it "off topic". > > 2. You have not been able to show us anything in my posting that is > "vaguery" (surprise surprise) . It wouldn't do any good if I did. Delusional people are usually not fixable. > > > Once again, you have suffered a 'typing Tourettes' attack, but couldn't > even direct it to a vaguely relevant posting in the debate. > > It is has been most amusing watching you squirm like an impaled worm on > yet another episode of your muppetry. And as always, Usenet archives record > them for posterity Speaking of shameful record, were you the one who claimed a p/r version of the publications exampled would have to have a "combinatorial explosion", which you failed to prove and tried to change the subject? Or was that lameman? I get the two of you mixed up. > > > Regards, > Steven Perryman -T-
S Perryman wrote: > David Cressey wrote: > > > "Jon Heggland" <jon.heggland@ntnu.no> wrote in message > > news:frdi08$mvo$1@kuling.itea.ntnu.no... > > >>Quoth S Perryman: > > >>>Consider yourself promoted to uber-muppet. > > >>When did "muppet" become an insult? > > http://en.wikipedia.org/wiki/Muppet_%28slang%29 > > > > Hey, it isn't easy being right! > > Not sure what your comment has to do with the term "muppet" . It's a twist on a song by a muppet. Perhaps you should better study your artifacts of insults before using them. If a hooker shoves things up her [bleep] without checking them first, she's asking for it. > > > Regards, > Steven Perryman -T-
On Fri, 14 Mar 2008 06:33:45 -0700 (PDT), frebe wrote: > On 14 Mar, 10:43, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > wrote: >> On Thu, 13 Mar 2008 22:17:40 -0700 (PDT), frebe wrote: >>> On 13 Mar, 18:40, Robert Martin <uncle...@objectmentor.com> wrote: >>>> The real point of that remark was that the user of a tool is at a >>>> higher level of abstraction than the tool itself. SQL is a tool. ORMs >>>> are tools that use SQL to get their job done, just like compilers use >>>> assembly to get their job done. In that sense ORMs live at a higher >>>> level of abstraction than SQL. >> >>> Lets have an example: There are many "compiler" products translating >>> from a high-level language like ADA to a low-level language like C, >>> instead of translating to machine code directly. What if someone wrote >>> a "compiler" translating C source code to ADA source code, would that >>> make C more high level than ADA? Hardly? The existance of a product >>> translating from language A to language B doesn't say anything about >>> the levels of A and B. >> >> Right. What does it, is the difficulty of designing such a compiler. >> Clearly within the set of Turing-complete languages you could translate >> from whatever language to any other. But, while translation from Ada to C >> is considerably difficult (mainly because C is ill-defined), a good >> translation from C to Ada is almost impossible. > > That's why the OO camp has such problems with making a good ORM. If > SQL would have been low-level, compared to the network model, the task > would have been much easier. Not necessarily. Certain architectures are difficult to translate into, for vector processors. It is related to the presumption of computational equivalence. A difficulty or impossibility to translate can come from weakness of a given language. SQL is pretty weak. Clearly when SQL is used as an intermediate language for an ORM, then to have it lower level and more imperative than it is would be an advantage. But I agree that ORM is wasting time. In my view other architectures are needed (like WAN-wide persistent objects). In short DBMS to be scrapped as a concept. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
On Mar 14, 8:16 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > On Fri, 14 Mar 2008 06:33:45 -0700 (PDT), frebe wrote: > > On 14 Mar, 10:43, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > > wrote: > >> On Thu, 13 Mar 2008 22:17:40 -0700 (PDT), frebe wrote: > >>> On 13 Mar, 18:40, Robert Martin <uncle...@objectmentor.com> wrote: > >>>> The real point of that remark was that the user of a tool is at a > >>>> higher level of abstraction than the tool itself. SQL is a tool. ORMs > >>>> are tools that use SQL to get their job done, just like compilers use > >>>> assembly to get their job done. In that sense ORMs live at a higher > >>>> level of abstraction than SQL. > > >>> Lets have an example: There are many "compiler" products translating > >>> from a high-level language like ADA to a low-level language like C, > >>> instead of translating to machine code directly. What if someone wrote > >>> a "compiler" translating C source code to ADA source code, would that > >>> make C more high level than ADA? Hardly? The existance of a product > >>> translating from language A to language B doesn't say anything about > >>> the levels of A and B. > > >> Right. What does it, is the difficulty of designing such a compiler. > >> Clearly within the set of Turing-complete languages you could translate > >> from whatever language to any other. But, while translation from Ada to C > >> is considerably difficult (mainly because C is ill-defined), a good > >> translation from C to Ada is almost impossible. > > > That's why the OO camp has such problems with making a good ORM. If > > SQL would have been low-level, compared to the network model, the task > > would have been much easier. > > Not necessarily. Certain architectures are difficult to translate into, for > vector processors. It is related to the presumption of computational > equivalence. A difficulty or impossibility to translate can come from > weakness of a given language. SQL is pretty weak. > > Clearly when SQL is used as an intermediate language for an ORM, then to > have it lower level and more imperative than it is would be an advantage. > > But I agree that ORM is wasting time. In my view other architectures are > needed (like WAN-wide persistent objects). In short DBMS to be scrapped as > a concept. > One of the problems in translation is that OO usually makes a big distinction between an individual object and a collection of objects, whereas in RDBMS there is no real difference: going from 1 to a million is seamless (outside of performance issues). In OO, the collection is usually a different object/class than the items in the collection. (This is a manifestation of the set-oriented thinking versus navigational structures of OO.) Nobody has figured out how to use encapsulation to hide the difference. Cursor-like techniques can be applied, but that just makes OO look like a half-finished (navigational) database because objects then must start following global collection-oriented rules, losing the self-handling-noun feel that makes OO feel like OO. I don't think anyone can solve the impedance mismatch without turning one into the other. Each allows/requires different areas of freedoms and restrictions. If you plug up one, you have to open another to compensate, and vice versa. > -- > Regards, > Dmitry A. Kazakov http://www.dmitry-kazakov.de -T-
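A small sketch of that item/collection asymmetry (Python, with a hypothetical Account class, illustrative only): in the navigational style the single item and the collection are different kinds of things and the caller walks the collection, while in the set-oriented style one declarative expression covers one row or a million.

    # OO/navigational style: Account and list-of-Account are different things,
    # and the caller navigates the collection explicitly.
    class Account:
        def __init__(self, acct_id, balance):
            self.acct_id = acct_id
            self.balance = balance

    accounts = [Account(1, 50.0), Account(2, 900.0), Account(3, 1200.0)]
    rich = []
    for acct in accounts:               # item-at-a-time navigation
        if acct.balance > 1000:
            rich.append(acct.acct_id)

    # Set-oriented style: one declarative expression, whether the "table"
    # holds one row or a million.
    rows = {(1, 50.0), (2, 900.0), (3, 1200.0)}
    rich_ids = {acct_id for (acct_id, balance) in rows if balance > 1000}
    print(rich, rich_ids)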
On Fri, 14 Mar 2008 08:55:55 -0700 (PDT), topmind wrote: > One of the problems in translation is that OO usually makes a big > distinction between an individual object and a collection of objects, > whereas in RDBMS there is no real difference: going from 1 to a > million is seamless (outside of performance issues). In OO, the > collection is usually a different object/class than the items in the > collection. (This is a manifestation of the set-oriented thinking > versus navigational structures of OO.) Nobody has figured out how to > use encapsulation to hide the difference. LOL. Dear you should really read something introductory on set theory, just in order to never post anything like that. The distinction between set and the elements of, plays a central role in modern mathematics. Otherwise see http://en.wikipedia.org/wiki/Barber_paradox -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
topmind wrote: > S Perryman wrote: TM>How many popular languages can you name that DON'T rely on trees or TM>DAGS for type matching and equivalency detection? TM>And that claim is not about types, but rather *usage* of types. >>Here are some very popular prog langs for the following arenas : >>commercial, general-purpose, Internet, safety-critical >>The prog langs : COBOL, C, Javascript, Ada(83) >>Which of them rely on "trees or DAGS for type matching and equivalency >>detection" ?? >>None. QED. > Can you demonstrate a type cycle (circular reference) allowed in C? For what purpose ?? The definition or use of such a type does not "rely on trees or DAGS for type matching and equivalency detection" . >>Additionally, you exposed your (previously indicated) ignorance >>about the fundamentals of type theory. For type matching is always done >>on the basis of type *name* and/or *structure* . >>Neither of which require "trees or DAGs" . > I never claimed "required". You are putting words in my mouth. Feel free to show us how/why : "type matching and equivalency detection" "tend to rely on similar hierarchical taxonomies (or at least DAG taxonomies)" . >>2. You have not been able to show us anything in my posting that is >> "vaguery" (surprise surprise) . > It wouldn't do any good if I did. Of course it would. For the sake of other people, and Usenet archive posterity at least. But tis a convenient excuse *not* to have your claims scrutinised, is it not. > Delusional people are usually not fixable. Good of you to own up to that mental defect that you suffer. >>It is has been most amusing watching you squirm like an impaled worm on >>yet another episode of your muppetry. And as always, Usenet archives record >>them for posterity > Speaking of shameful record, were you the one who claimed a p/r > version of the publications exampled would have to have a > "combinatorial explosion", which you failed to prove and tried to > change the subject? Or was that lameman? I get the two of you mixed up. Did you actually manage to understand your own "solution" well enough to be able to show what it outputs with the required input data (ie provide a functional equivalent of the type substitutabilty example as was defined on day 1) ?? If so, and you can demonstrate so to me, I am only too happy to resume that particular debate and show you the combinatorial problem inherent in your "solution" . If not, *shame on you* for prevaricating (and wasting Usenet resource) , in order to avoid admitting (again) insufficient understanding of english to do the things asked of you. Regards, Steven Perryman
S Perryman wrote: > ... (various trolling snipped, nothing that matters). Topmind, you speak a higher class of language than this cross-poster who doesn't seem to know it. As much as I would miss your comments, were you to ignore him, I for one would be happy to see his irrelevancies go away. Just my two cents.
paul c wrote: > S Perryman wrote: > >> ... (various trolling snipped, nothing that matters). > > > Topmind, you speak a higher class of language than this cross-poster who > doesn't seem to know it. As much as I would miss your comments, were > you to ignore him, I for one would be happy to see his irrelevancies go > away. Just my two cents. I am not sure topmind will see the post if you only post it in c.d.t
On Mar 14, 9:32 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > > LOL. Dear you should really read something introductory on set theory, just > in order to never post anything like that. The distinction between set and > the elements of, plays a central role in modern mathematics. If you read something *introductory* on set theory, you will in general see sets described using one set of terms, and elements of sets in different terms. (Although you won't necessarily see them described as belonging to distinct classes; it is usually left unspecified.) If you get past the introductory phase, however, the distinction evaporates. The specific axiomatic set theory that could most be said to play a central role is modern mathematics is ZFC, an axiomatic theory which does not admit the existence of anything that is *not* a set. *Every* element of every set in ZFC is itself a set; numbers are encoded as sets, etc. Of course there are less popular axiomatic theories such as Quine's New Foundations, which has a variant, NFU, which *does* draw a formal distinction between sets and elements. But by and large, the distinction between sets and elements does not exist in modern mathematics. Marshall
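For instance, under the standard von Neumann and Kuratowski constructions (summarized here in LaTeX notation), even the natural numbers and ordered pairs are themselves sets:

    % Von Neumann naturals: each number is the set of its predecessors.
    0 = \varnothing, \qquad 1 = \{\varnothing\}, \qquad 2 = \{\varnothing, \{\varnothing\}\}, \qquad n + 1 = n \cup \{n\}
    % Kuratowski ordered pairs: even "tuples" are sets.
    (a, b) = \{\{a\}, \{a, b\}\}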
On Mar 14, 7:16 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > > In short DBMS to be scrapped as a concept. Oh, I absolutely agree. In the modern context, things like structure, integrity, and manipulation of data are passe! For my own tastes, data corruption cannot happen fast enough. Why waste time *managing* you data when you can just splat it out on the network to fend for itself? What the hell did data ever do for me anyway? Marshall
"Marshall" <marshall.spight@gmail.com> wrote in message news:2232ca10-8176-4511-9ba9-ef1822fd83dd@e25g2000prg.googlegroups.com... > On Mar 14, 7:16 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > wrote: >> >> In short DBMS to be scrapped as a concept. > > Oh, I absolutely agree. In the modern context, things like structure, > integrity, and manipulation of data are passe! For my own tastes, > data corruption cannot happen fast enough. Why waste time *managing* > you data when you can just splat it out on the network to fend for > itself? What the hell did data ever do for me anyway? > Funny...for a moment I thought you were referring to the news media. They aren't to be bothered with integrity, they're corrupt--slanting the information to further their own agenda, and lately they haven't even been checking sources--just splat it out on the network whether it's true or not. > > Marshall >
Bob Badour wrote: .... > I am not sure topmind will see the post if you only post it in c.d.t Oh, pardon me I thought topmind had posted db-only topics but maybe I have him mixed up with somebody else. I like a little nonsense, post some myself and absurd points sometimes help me get my bearings but every once in a while some people persist ad nauseum and make a real nuisance of themselves and sometimes it helps if a relative stranger tells them so. The only way I could get a guy I once worked with off his one-track pathway was to curse him up and down the hallway every six months in easy earshot of the whole office. I wouldn't try that with everybody but it worked with him, he would immediately become contrite and for the next five months he'd be reasonable and consider what other people had to say, then he'd slip back into his natural ways - he was very clever but I thought he was also a mild sociopath.
On Mar 15, 12:16 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > On Fri, 14 Mar 2008 06:33:45 -0700 (PDT), frebe wrote: > > > That's why the OO camp has such problems with making a good ORM. If > > SQL would have been low-level, compared to the network model, the task > > would have been much easier. > > Not necessarily. Certain architectures are difficult to translate into, for > vector processors. It is related to the presumption of computational > equivalence. A difficulty or impossibility to translate can come from > weakness of a given language. SQL is pretty weak. > > Clearly when SQL is used as a intermediate language for an ORM, then to > have it lower level and more imperative than it is would be an advantage. > > But I agree that ORM is wasting time. In my why other architectures are > needed (like WAN-wide persistent objects). In short DBMS to be scrapped as > a concept. I expect you like the idea of distributed OO, orthogonal persistence, location transparency and so on. However the literature is hardly compelling. There is the problem of - finding a consistent cut (ie that respects the happened-before relation) - the contradiction between transactions and orthogonal persistence - the contradiction between rolling back a transaction and orthogonal persistence - the impossibility of reliable distributed transactions - the fact that synchronous messages over the wire can easily be a million times slower than in-process calls - the fallibility of distributed synchronous messages which contradicts location transparency - the enormously difficult problem of distributed locking * how to avoid concurrency bottlenecks * when to release locks in the presence of network or machine failures * distributed deadlock detection. * rolling back a distributed transaction. - how to schema evolve a distributed OO system assuming orthogonal persistence and location transparency. - how to manage security when a process exposes many of its objects for direct communication with objects in another process. Persistent, distributed state machines raise more questions than answers. Persistent distributed encoded values provide a much better basis for building a system. SOA suggests that a large system should be decomposed by behaviour (ie "services") which is basically an OO way of thinking. It is a flawed approach to the extent that it is promoted as the main way to build enterprise systems. The only proven scalable approach is to remain data-centric at ever increasing scales. The easiest way for distributed applications to communicate is indirectly via shared data rather than by direct communication. This is implicit with a data-centric approach. The WWW is data-centric. It is not at all surprising that Http on port 80 is *much* more common than RPC, CORBA, DCOM, RMI and SOAP put together. Http concerns messages used to access data instead of messages used to elicit behaviour.
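As a rough illustration of that last distinction (a sketch only; the endpoint URL and resource layout are invented for the example, using nothing beyond the Python standard library): a data-centric request asks for the current state of something and can be retried or cached freely, while a behaviour-eliciting request cannot, which is where the failure handling listed above starts to bite.

    import json
    import urllib.request

    BASE = "http://example.com/api"          # hypothetical service

    def fetch_account(account_id):
        # Data-centric: "give me the current state of account N".
        # Idempotent, cacheable, safely retried after a timeout.
        with urllib.request.urlopen("%s/accounts/%d" % (BASE, account_id)) as resp:
            return json.load(resp)

    def debit_account(account_id, amount):
        # Behaviour-centric: "make account N do something".  Not idempotent;
        # if the reply is lost the caller cannot tell whether the debit
        # happened, which is the distributed-transaction problem above.
        req = urllib.request.Request(
            "%s/accounts/%d/debit" % (BASE, account_id),
            data=json.dumps({"amount": amount}).encode("utf-8"),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req).close()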
On Mar 14, 5:59 pm, David BL <davi...@iinet.net.au> wrote: > > - the impossibility of reliable distributed transactions Are they actually impossible? I know that distributed consensus is impossible; Byzantine generals and all. I know little about transactions, but my vague impression was that 2PC was stronger than just an illusion. > - the fact that synchronous messages over the wire can easily > be a million times slower than in-process calls This is *crucial.* > - the fallibility of distributed synchronous messages which > contradicts location transparency This is manageable when you combine synchronous and asynchronous messaging. Synchronous idempotent messaging that does not depend on the identity of the receiver is actually pretty easy; asynchronous messaging should handle as much as possible. Synchronous messaging that does depend on the identity of the receiver should be kept to an absolute minimum, and has to be tolerant of failure. The guidelines I just described might exclude certain classes of applications (I'm not certain) but many, many things can be done this way. Location transparency as a practical reality is achievable and in fact absolutely tits when done well. > - how to schema evolve a distributed OO system assuming > orthogonal persistence and location transparency. This is one of the things that convinces me of the superiority of structural type systems for distributed computing. My expectation is that languages are going to face evolutionary pressure in the direction of features that are distributed- and multi-core- friendly. Schema evolution for nominally typed languages has proven to be quite brittle. I am not convinced it is *necessarily* so, but it begins to look like it. > SOA suggests that a large system should be decomposed by behaviour (ie > "services") which is basically an OO way of thinking. It is a flawed > approach to the extent that it is promoted as the main way to build > enterprise systems. The only proven scalable approach is to remain > data-centric at ever increasing scales. Mmmm, I mostly agree but maybe you said it a bit too strong. Datacenter-services as an approach works okay, and even scales, provided you don't need much flexibility and can keep clients and servers closely coupled. Okay come to think of it that's a pretty bad situation to be in. I changed my mind: you have a good point. > The WWW is data-centric. It is not at all surprising that Http on > port 80 is *much* more common than RPC, CORBA, DCOM, RMI and SOAP put > together. Http concerns messages used to access data instead of > messages used to elicit behaviour. Interesting point. My high-level viewpoint: two important success factors for distributed computing turn out to be<drumroll>: logical independence and physical independence, at the (network) protocol level! Surprise! (I apologize for the buzzword-density of this post.) Marshall
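For what it's worth, the nominal/structural distinction Marshall appeals to can be sketched in a few lines (Python's typing.Protocol standing in for a structural type system; the class names are invented for the example):

    from typing import Protocol

    class Named(Protocol):          # structural type: anything with a .name
        name: str

    class EmployeeV1:               # deployed on node A
        def __init__(self, name):
            self.name = name

    class EmployeeV2:               # independently evolved on node B;
        def __init__(self, name, email):   # no common base class needed
            self.name = name
            self.email = email

    def greet(x: Named) -> str:
        # A structural checker accepts both versions because the required
        # shape is still present; a nominal system would demand that both
        # nodes agree on, and redeploy, a shared named type.
        return "hello " + x.name

    print(greet(EmployeeV1("Alice")), greet(EmployeeV2("Bob", "b@example.org")))

The point for schema evolution is that node B can add fields without node A ever having heard of EmployeeV2, as long as the shape the consumer relies on survives.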
On Mar 15, 12:14 pm, Marshall <marshall.spi...@gmail.com> wrote: > On Mar 14, 5:59 pm, David BL <davi...@iinet.net.au> wrote: > > > > > - the impossibility of reliable distributed transactions > > Are they actually impossible? I know that distributed consensus > is impossible; Byzantine generals and all. I know little about > transactions, but my vague impression was that 2PC was > stronger than just an illusion. I mean 100% reliable in the face of arbitrary network failures. Date says it like this... "There does not exist any finite protocol that will guarantee that all participants will commit successful transactions in unison and roll back unsuccessful transactions in unison, in the face of arbitrary failures". The proof is very simple. See "An Introduction to Database Systems", 8th edition, page 668. > > - the fact that synchronous messages over the wire can easily > > be a million times slower than in-process calls > > This is *crucial.* > > > - the fallibility of distributed synchronous messages which > > contradicts location transparency > > This is manageable when you combine synchronous and > asynchronous messaging. Synchronous idempotent messaging > that does not depend on the identity of the receiver is actually > pretty easy; asynchronous messaging should handle as much > as possible. Synchronous messaging that does depend on > the identity of the receiver should be kept to an absolute > minimum, and has to be tolerant of failure. The guidelines > I just described might exclude certain classes of applications > (I'm not certain) but many, many things can be done this way. > > Location transparency as a practical reality is achievable and > in fact absolutely tits when done well. I'm certainly not saying the idea of location transparency is completely worthless. Clearly there exist useful distributed state machines where location transparency has been achieved. However, there exist many more examples where in-process objects cannot be moved out of process without breaking the system (in the presence of network failure or the many orders of magnitude drop in performance). In practice location transparency has to be designed for, which in a way is at odds with its premise. > > - how to schema evolve a distributed OO system assuming > > orthogonal persistence and location transparency. > > This is one of the things that convinces me of the superiority > of structural type systems for distributed computing. My expectation > is that languages are going to face evolutionary pressure in > the direction of features that are distributed- and multi-core- > friendly. > > Schema evolution for nominally typed languages has proven > to be quite brittle. I am not convinced it is *necessarily* so, > but it begins to look like it. I believe I understand the distinction between nominal/structural, but I don't know why you would say structural is better for distributed computing. Can you elaborate? True orthogonal persistence implies that everything persists - even threads. In its purest form the idea is to be able to turn off a computer and later turn it on again and all threads and processes continue running as if it never was switched off. This eliminates the need for transactions and in fact for the programmer to care at all about the distinction between persistent and transient objects. This depends on finding a so-called "consistent cut". This is bad enough on a single machine, never mind trying to do it in a distributed system.
BTW I know you are interested in lattice theory, so you might be interested to know that the consistent cuts form a lattice. Remarkably, some research projects have tried to achieve this (eg Grasshopper). However, the problem of schema evolution is a showstopper. How do you evolve a state machine while it is running (or has been snapshot using a consistent cut)? > > SOA suggests that a large system should be decomposed by behaviour (ie > > "services") which is basically an OO way of thinking. It is a flawed > > approach to the extent that it is promoted as the main way to build > > enterprise systems. The only proven scalable approach is to remain > > data-centric at ever increasing scales. > > Mmmm, I mostly agree but maybe you said it a bit too strong. > Datacenter-services as an approach works okay, and even > scales, provided you don't need much flexibility and can keep > clients and servers closely coupled. Okay come to think of it > that's a pretty bad situation to be in. I changed my mind: you > have a good point. > > > The WWW is data-centric. It is not at all surprising that Http on > > port 80 is *much* more common than RPC, CORBA, DCOM, RMI and SOAP put > > together. Http concerns messages used to access data instead of > > messages used to elicit behaviour. > > Interesting point. > > My high-level viewpoint: two important success factors for > distributed computing turn out to be<drumroll>: logical independence > and physical independence, at the (network) protocol level! > > Surprise! > > (I apologize for the buzzword-density of this post.)
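For reference, the usual formalisation behind both remarks (following Mattern's treatment of global states): with E the set of events and -> the happened-before relation, a cut is consistent when it is downward closed under ->, and the consistent cuts ordered by inclusion form a lattice because that property is preserved by union and intersection:

    C \subseteq E \text{ is consistent} \iff \forall e \in C,\ \forall e' \in E:\ (e' \to e) \Rightarrow e' \in C

    C_1 \vee C_2 = C_1 \cup C_2, \qquad C_1 \wedge C_2 = C_1 \cap C_2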
S Perryman wrote: > topmind wrote: > > > S Perryman wrote: > > TM>How many popular languages can you name that DON'T rely on trees or > TM>DAGS for type matching and equivalency detection? > > TM>And that claim is not about types, but rather *usage* of types. > > >>Here are some very popular prog langs for the following arenas : > > >>commercial, general-purpose, Internet, safety-critical > > >>The prog langs : COBOL, C, Javascript, Ada(83) > > >>Which of them rely on "trees or DAGS for type matching and equivalency > >>detection" ?? > > >>None. QED. > > > Can you demonstrate a type cycle (circular reference) allowed in C? > > For what purpose ?? What structure would you describe them as? Compilers use search algorithms against internal data structures of program/syntax representation to determine what type something is (or is compatible with). Such can usually be characterized by data structures such as trees, stacks, DAGs, graphs, etc. I should point out that many set systems don't have cycles either, and could be represented with DAGs if one wanted to. > > The definition or use of such a type does not "rely on trees or DAGS for > type matching and equivalency detection" . > > > >>Additionally, you exposed your (previously indicated) ignorance > >>about the fundamentals of type theory. For type matching is always done > >>on the basis of type *name* and/or *structure* . > > >>Neither of which require "trees or DAGs" . > > > I never claimed "required". You are putting words in my mouth. > > Feel free to show us how/why : > > "type matching and equivalency detection" > "tend to rely on similar hierarchical taxonomies (or at least DAG > taxonomies)" . > Those are two different things. Requiring trees/dags and "usually using" trees/dags are not equivalent. > > >>2. You have not been able to show us anything in my posting that is > >> "vaguery" (surprise surprise) . > > > It wouldn't do any good if I did. > > Of course it would. For the sake of other people, and Usenet archive > posterity at least. > > But tis a convenient excuse *not* to have your claims scrutinised, is it > not. > > > > Delusional people are usually not fixable. > > Good of you to own up to that mental defect that you suffer. Neener Neener to you too. > > > >>It is has been most amusing watching you squirm like an impaled worm on > >>yet another episode of your muppetry. And as always, Usenet archives record > >>them for posterity > > > Speaking of shameful record, were you the one who claimed a p/r > > version of the publications exampled would have to have a > > "combinatorial explosion", which you failed to prove and tried to > > change the subject? Or was that lameman? I get the two of you mixed up. > > Did you actually manage to understand your own "solution" well enough > to be able to show what it outputs with the required input data (ie > provide a functional equivalent of the type substitutabilty example as was > defined on day 1) ?? Those were your MADE UP internal goddam requirements. I am NOT obligated to mirror them. > > If so, and you can demonstrate so to me, I am only too happy to resume that > particular debate and show you the combinatorial problem inherent in your > "solution" . > > If not, *shame on you* for prevaricating (and wasting Usenet resource) , in > order to avoid admitting (again) insufficient understanding of english to > do the things asked of you. You got yourself into a corner and are making up requirements to backpeddle. You lost, dude! Fess up. No "combinatorial explosion" is required. 
Eat the Truth! Toppie won that one. > > > Regards, > Steven Perryman -T-
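Setting the insults aside, the structure under dispute is easy to sketch (a toy, not any particular compiler's algorithm): a recursive type is a cycle in the type graph, so a structural equivalence check has to walk a graph with memoisation rather than a plain tree.

    # A toy structural-equivalence check over a type graph.  Field names map
    # to other types, and a type may refer back to itself (a cycle), so the
    # walk memoises the pairs it has already assumed equal.
    class Type:
        def __init__(self, name, fields=None):
            self.name = name              # ignored by the structural check
            self.fields = fields or {}    # field name -> Type

    def equivalent(a, b, seen=None):
        seen = seen if seen is not None else set()
        if (id(a), id(b)) in seen:
            return True                   # already assumed equal on this cycle
        seen.add((id(a), id(b)))
        if set(a.fields) != set(b.fields):
            return False
        return all(equivalent(a.fields[k], b.fields[k], seen) for k in a.fields)

    # Two independently defined self-referential list-node types.
    n1 = Type("Node1"); n1.fields = {"value": Type("int"), "next": n1}
    n2 = Type("Node2"); n2.fields = {"value": Type("int"), "next": n2}
    print(equivalent(n1, n2))             # True

Whether a given language's rules *require* such a representation is, of course, exactly what the two of them are arguing about.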
On Fri, 14 Mar 2008 18:59:49 -0700 (PDT), David BL wrote: > On Mar 15, 12:16 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > wrote: >> On Fri, 14 Mar 2008 06:33:45 -0700 (PDT), frebe wrote: >> >>> That's why the OO camp has such problems with making a good ORM. If >>> SQL would have been low-level, compared to the network model, the task >>> would have been much easier. >> >> Not necessarily. Certain architectures are difficult to translate into, for >> vector processors. It is related to the presumption of computational >> equivalence. A difficulty or impossibility to translate can come from >> weakness of a given language. SQL is pretty weak. >> >> Clearly when SQL is used as a intermediate language for an ORM, then to >> have it lower level and more imperative than it is would be an advantage. >> >> But I agree that ORM is wasting time. In my why other architectures are >> needed (like WAN-wide persistent objects). In short DBMS to be scrapped as >> a concept. > > I expect you like the idea of distributed OO, Well, distributed OO is a different thing to me. It is when an object is distributed over a set of nodes, a kind of wave, rather than a particle... > orthogonal persistence, > location transparency and so on. Issues which might be too big to swallow in one gulp. > However the literature is hardly > compelling. There is the problem of > > - finding a consistent cut (ie that respects the > happened-before relation) > > - the contradiction between transactions and orthogonal > persistence > > - the contradiction between rolling back a transaction > and orthogonal persistence Complementarity, rather than mere contradiction. > - the impossibility of reliable distributed transactions There is no such thing as unconditionally reliable computing anyway. > - the fact that synchronous messages over the wire can easily > be a million times slower than in-process calls Huh, don't buy multi-core processors, don't use any memory except registers etc. This is not an argument, so long no concrete time constraint put down. Below you mentioned HTTP as an example. It is milliard times slower, who cares? "Bright" minds use XML as a transport level, and for that matter, interpreted SQL... > - the fallibility of distributed synchronous messages which > contradicts location transparency That depends on what is transparent to what. I don't see why synchronization should play any role here. I assume you meant something like routing to moving targets, then that would apply to both. > - the enormously difficult problem of distributed locking > * how to avoid concurrency bottlenecks > * when to release locks in the presence of network or machine > failures > * distributed deadlock detection. > * rolling back a distributed transaction. This is a sort of mixing lower and higher level synchronization abstractions. If you use transactions then locking is an implementation detail. Anyway all these problems are ones of concurrent computing in general, they are not specific to distributed computing and even less than that to OO. You can always consider concurrent remote tasks running local. > - how to schema evolve a distributed OO system assuming > orthogonal persistence and location transparency. > > - how to manage security when a process exposes many of its > objects for direct communication with objects in another > process. On per object basis. > Persistent, distributed state machines raise more questions than > answers. 
Persistent distributed encoded values provide a much better > basis for building a system. I am not sure what you mean here. Referential vs. by-value semantics of objects, or user-defined vs. inferred behavior of? Clearly you cannot get rid of values as well as of references (values of identity). Are you arguing for inference? Do you believe that distributed inference would help in any way? No, it will have all the problems you listed plus uncountable new others. When the behaviour is defined by the programmer/user, that moves the burden from the system to him. This makes things a lot easier. You don't need to infer that l = 2 Pi r in the steering wheel microcontroller, you can take it for granted, if the programmer says so. So long we will remain more intelligent than our programs, inference will always play a subordinate role. Once/if computers will surpass us, we will no more program them. Inference is clearly a dead end, a mental laziness. > SOA suggests that a large system should be decomposed by behaviour (ie > "services") which is basically an OO way of thinking. Well, IMO SOA is a "hype way of thinking," a marketing slogan with no substance... > It is a flawed > approach to the extent that it is promoted as the main way to build > enterprise systems. But it sells good... > The only proven scalable approach is to remain > data-centric at ever increasing scales. RDBMS sells good as well... (:-)) > The easiest way for distributed applications to communicate is > indirectly via shared data rather than by direct communication. This > is implicit with a data-centric approach. Ooch, shared data is the worst possible way. Note how hardware architectures have been moving away from shared memory. Sooner or later it should hit software design. BTW, data sharing is much in OO way. Many understand OO equivalent to referential semantics. It is a wrong perception, but taking it for simplicity of argument, functional (value semantics) fits massively parallel architectures much better. But again, it a wrong perception. OO adds user-defined semantics to values. Identity is user-defined as well. You can share or exchange, it is orthogonal to OO. > The WWW is data-centric. It is not at all surprising that Http on > port 80 is *much* more common than RPC, CORBA, DCOM, RMI and SOAP put > together. Http concerns messages used to access data instead of > messages used to elicit behaviour. That does not wonder me. WWW evolves from bottom to top. Mammals have the gills when they undergo the embryo stage of development. -- Regards, Dmitry A. Kazakov http://www.dmitry-kazakov.de
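To put the shared-memory versus value-passing contrast in a small, concrete form (a toy sketch, nobody's proposal): with shared mutable state every writer has to coordinate through a lock, while with value semantics the workers only hand immutable values to a single owner of the result.

    import queue
    import threading

    # Style 1: shared mutable state -- every writer must take the lock.
    total = 0
    lock = threading.Lock()

    def add_shared(n):
        global total
        with lock:
            total += n

    for i in range(5):
        add_shared(i)
    print(total)                                 # 10

    # Style 2: value passing -- workers send values; one consumer owns the
    # running total, so there is no user-level locking of shared state.
    inbox = queue.Queue()

    def add_by_message(n):
        inbox.put(n)

    workers = [threading.Thread(target=add_by_message, args=(i,)) for i in range(5)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    print(sum(inbox.get() for _ in range(5)))    # 10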
On Mar 15, 6:12 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> wrote: > On Fri, 14 Mar 2008 18:59:49 -0700 (PDT), David BL wrote: > > On Mar 15, 12:16 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > > wrote: > >> On Fri, 14 Mar 2008 06:33:45 -0700 (PDT), frebe wrote: > > >>> That's why the OO camp has such problems with making a good ORM. If > >>> SQL would have been low-level, compared to the network model, the task > >>> would have been much easier. > > >> Not necessarily. Certain architectures are difficult to translate into, for > >> vector processors. It is related to the presumption of computational > >> equivalence. A difficulty or impossibility to translate can come from > >> weakness of a given language. SQL is pretty weak. > > >> Clearly when SQL is used as a intermediate language for an ORM, then to > >> have it lower level and more imperative than it is would be an advantage. > > >> But I agree that ORM is wasting time. In my why other architectures are > >> needed (like WAN-wide persistent objects). In short DBMS to be scrapped as > >> a concept. > > > I expect you like the idea of distributed OO, > > Well, distributed OO is a different thing to me. It is when an object is > distributed over a set of nodes, a kind of wave, rather than a particle... That sounds like fragmented objects. http://en.wikipedia.org/wiki/Fragmented_object > > orthogonal persistence, > > location transparency and so on. > > Issues which might be too big to swallow in one gulp. > > > However the literature is hardly > > compelling. There is the problem of > > > - finding a consistent cut (ie that respects the > > happened-before relation) > > > - the contradiction between transactions and orthogonal > > persistence > > > - the contradiction between rolling back a transaction > > and orthogonal persistence > > Complementarity, rather than mere contradiction. The following argument appears in "Concurrency, the fly in the ointment" by Blackburn and Zigman: The transactional model implicitly requires the dichotomy of two worlds - an internal one for the persistent data, and an external non persistent world that issues transactions over the first. This follows from the impossibility of an ACID transaction being invoked from within an (atomic) transaction - ie a transaction cannot be the basis for its own nested invocation. By definition of atomicity of a parent transaction, the durability of any nested (ie child) transaction is subject to the atomicity of the parent transaction. This is in conflict with an independent durability required by a child ACID transaction. > > - the impossibility of reliable distributed transactions > > There is no such thing as unconditionally reliable computing anyway. Sure, but it is customary to assume infallibility within a process and fallibility between processes. > > - the fact that synchronous messages over the wire can easily > > be a million times slower than in-process calls > > Huh, don't buy multi-core processors, don't use any memory except registers > etc. This is not an argument, so long no concrete time constraint put down. > Below you mentioned HTTP as an example. It is milliard times slower, who > cares? "Bright" minds use XML as a transport level, and for that matter, > interpreted SQL... You missed the point. Fine grained interchanges of messages are useful within a process but are something to be avoided between processes. The penalty is so high that distributed computing systems must account for it in the high level design. 
Between processes it is better to stream data asynchronously, such as the way OpenGL drawing commands are piped between client/server without any round trip delay for each drawing command. > > - the fallibility of distributed synchronous messages which > > contradicts location transparency > > That depends on what is transparent to what. I don't see why > synchronization should play any role here. I assume you meant something > like routing to moving targets, then that would apply to both. By definition, the call of an asynchronous message (a "post") can return without knowing whether the message was received, whereas a synchronous message must block. That raises the question of what to do when the network fails. This impacts design by contract (in conflict with location transparency). > > - the enormously difficult problem of distributed locking > > * how to avoid concurrency bottlenecks > > * when to release locks in the presence of network or machine > > failures > > * distributed deadlock detection. > > * rolling back a distributed transaction. > > This is a sort of mixing lower and higher level synchronization > abstractions. If you use transactions then locking is an implementation > detail. Anyway all these problems are ones of concurrent computing in > general, they are not specific to distributed computing and even less than > that to OO. You can always consider concurrent remote tasks running local. The point is that concurrency interacts very badly with orthogonal persistence and location transparency - to the extent that it casts serious doubt on whether orthogonal persistence and location transparency are useful concepts in the first place. > > - how to schema evolve a distributed OO system assuming > > orthogonal persistence and location transparency. > > > - how to manage security when a process exposes many of its > > objects for direct communication with objects in another > > process. > > On per object basis. In reality security can only be controlled at the boundary between processes and that conflicts with location transparency. Allowing direct communication between objects opens up security holes everywhere. By contrast, the data-centric approach allows the inter-process message protocol to be simple and implemented entirely within the DBMS layers. > > Persistent, distributed state machines raise more questions than > > answers. Persistent distributed encoded values provide a much better > > basis for building a system. > > I am not sure what you mean here. Referential vs. by-value semantics of > objects, or user-defined vs. inferred behavior of? Clearly you cannot get > rid of values as well as of references (values of identity). I'm saying persistent data should be nothing more than persistent encoded values instead of snapshots (ie consistent cuts) of multithreaded or distributed state machines. The former is much simpler than the latter. > Are you arguing for inference? Do you believe that distributed inference > would help in any way? No, it will have all the problems you listed plus > uncountable new others. When the behaviour is defined by the > programmer/user, that moves the burden from the system to him. This makes > things a lot easier. You don't need to infer that l = 2 Pi r in the > steering wheel microcontroller, you can take it for granted, if the > programmer says so. > > So long we will remain more intelligent than our programs, inference will > always play a subordinate role. Once/if computers will surpass us, we will > no more program them.
> Inference is clearly a dead end, a mental laziness. > > > SOA suggests that a large system should be decomposed by behaviour (ie > > "services") which is basically an OO way of thinking. > > Well, IMO SOA is a "hype way of thinking," a marketing slogan with no > substance... > > It is a flawed > > approach to the extent that it is promoted as the main way to build > > enterprise systems. > > But it sells good... > > > The only proven scalable approach is to remain > > data-centric at ever increasing scales. > > RDBMS sells good as well... (:-)) > > > The easiest way for distributed applications to communicate is > > indirectly via shared data rather than by direct communication. This > > is implicit with a data-centric approach. > > Ooch, shared data is the worst possible way. Note how hardware > architectures have been moving away from shared memory. Sooner or later it > should hit software design. Really? Are you suggesting there is a trend away from SMP? Here is an example of the benefits of indirect communication between applications with a shared data model. We have the following applications: 1. The company timesheet entry system 2. The company payroll system 3. The company email system Consider that the list of employees is managed by a DBMS and is accessible to each of these applications. Whenever any of these applications changes the shared data, all the other applications will reflect the changes. An alternative is for the applications to avoid shared data, with special message protocols developed to allow them to talk to each other. Do you agree that's not a very good solution? A third approach is to develop some kind of message-oriented service that centralises the information about the employees. However, then all the problems of distributed OO arise (such as the terrible performance of synchronous messages, causality and consistent cuts, etc.). > BTW, data sharing is much in OO way. Many understand OO equivalent to > referential semantics. It is a wrong perception, but taking it for > simplicity of argument, functional (value semantics) fits massively > parallel architectures much better. > > But again, it a wrong perception. OO adds user-defined semantics to values. > Identity is user-defined as well. You can share or exchange, it is > orthogonal to OO. OO has little to do with the shared data for an enterprise. > > The WWW is data-centric. It is not at all surprising that Http on > port 80 is *much* more common than RPC, CORBA, DCOM, RMI and SOAP put > together. Http concerns messages used to access data instead of > messages used to elicit behaviour. > > That does not wonder me. WWW evolves from bottom to top. Mammals have the > gills when they undergo the embryo stage of development.
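A minimal sketch of the shared-data arrangement David describes (sqlite3 standing in for the company DBMS; the table and column names are invented for the example): the timesheet and payroll "applications" never exchange a message, they simply read and write the shared employee data, and each sees the other's changes.

    import sqlite3

    # The shared database (in-memory here, a real DBMS in practice).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, hours REAL DEFAULT 0)")
    db.execute("INSERT INTO employee (id, name) VALUES (1, 'Alice')")
    db.commit()

    def timesheet_entry(emp_id, hours):
        # The timesheet application records hours worked.
        db.execute("UPDATE employee SET hours = hours + ? WHERE id = ?", (hours, emp_id))
        db.commit()

    def payroll_run(hourly_rate):
        # The payroll application sees the timesheet's changes without any
        # application-to-application message protocol at all.
        return [(name, hours * hourly_rate)
                for name, hours in db.execute("SELECT name, hours FROM employee")]

    timesheet_entry(1, 7.5)
    print(payroll_run(50.0))    # [('Alice', 375.0)]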
On Sat, 15 Mar 2008 05:58:23 -0700 (PDT), David BL wrote: > On Mar 15, 6:12 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> > wrote: >> On Fri, 14 Mar 2008 18:59:49 -0700 (PDT), David BL wrote: >>> I expect you like the idea of distributed OO, >> >> Well, distributed OO is a different thing to me. It is when an object is >> distributed over a set of nodes, a kind of wave, rather than a particle... > > That sounds like fragmented objects. > > http://en.wikipedia.org/wiki/Fragmented_object Yep, that thing. >>> However the literature is hardly >>> compelling. There is the problem of >> >>> - finding a consistent cut (ie that respects the >>> happened-before relation) >> >>> - the contradiction between transactions and orthogonal >>> persistence >> >>> - the contradiction between rolling back a transaction >>> and orthogonal persistence >> >> Complementarity, rather than mere contradiction. > > The following argument appears in "Concurrency, the fly in the > ointment" by Blackburn and Zigman: > > The transactional model implicitly requires the dichotomy of two > worlds - an internal one for the persistent data, and an external non > persistent world that issues transactions over the first. This > follows from the impossibility of an ACID transaction being invoked > from within an (atomic) transaction - ie a transaction cannot be the > basis for its own nested invocation. > > By definition of atomicity of a parent transaction, the durability of > any nested (ie child) transaction is subject to the atomicity of the > parent transaction. This is in conflict with an independent > durability required by a child ACID transaction. Clearly, durability of any effect is conditional. So what? Anyway it is about composition of transactions, I don't see how this can collide with persistence. The latter is merely a matter of the scope where an object exists. Each object has a scope. Each object is persistent within it. Whether the scope is contained by the scope of OS, or its file system, or a cluster of hosts, is no matter to the issue (when scopes are nested.) >>> - the impossibility of reliable distributed transactions >> >> There is no such thing as unconditionally reliable computing anyway. > > Sure, but it is customary to assume infallibility within a process and > fallibility between processes. Inter process communications are as [un]reliable as any other. In each case you should specify what is taken for granted (premises) and what is the subject of QoS enforcing. >>> - the fact that synchronous messages over the wire can easily >>> be a million times slower than in-process calls >> >> Huh, don't buy multi-core processors, don't use any memory except registers >> etc. This is not an argument, so long no concrete time constraint put down. >> Below you mentioned HTTP as an example. It is milliard times slower, who &g