Relationships error, C# Visual Studio 2005 database bug?, "the columns in table XYZ do not match an existing primary key or UNIQUE constraint", copying columns

Problem:

I replicated, using Frasier "Pro Visual C++/CLI" (2006), Chap 12, a
database that has a relationship between two tables, called Content
and Author, using a common column called "AuthorID".  I used the
Server Explorer inside Visual Studio 2005 to do this.  It worked.

Now I tried the exact same thing from Visual Studio 2005 but this time
using C# not C++ as my language of choice.  The interface is slightly
different, but I made sure everything was done as before.  I used the
Server Explorer 'GUI' to do this, clicking and following the 'wizards'
as before.

But somehow, when using C# rather than C++, I get this error when
attempting to create a relationship between tables using two
identical columns called "AuthorID":  "the columns in table Authors do
not match an existing primary key or UNIQUE constraint".  Why?  The
columns are the same. I even tried copying and pasting columns from
one table to the other (this made no difference), but I still get
this error.

I tried everything, even deleting columns and rebuilding them, which
eventually created a new error that fatally compromised the database
(if you're curious, and as an aside, it created this error: "'Authors'
table - Unable to modify table.   Cannot insert the value NULL into
column 'AuthorID', table 'DCV3_CSharp.dbo.Tmp_Authors'; column does
not allow nulls. INSERT fails.
The statement has been terminated.")

One thing (that shouldn't matter, since it didn't matter when I used
C++ rather than C#):  the column in table "Author" is a primary key,
while it is a "foreign key" in table "Content".  But that is
irrelevant when creating a relationship, no?  Also the checkbox "Allow
Nulls" is checked "NO" for all columns.  This did not matter when
successfully building the relationship using C++, but I wonder if
perhaps C# is less forgiving, and maybe I should check "YES" to
allow Nulls?

Very bizarre.  If anybody has any advice please offer it.  I think SQL
is a dinosaur language and should be scrapped, and RDBMS are obsolete
in view of cheap memory, faster processors, and managed pointers, but
that's another matter.

Thanks.

RL

raylopez99 (937)
7/15/2007 12:05:16 AM

raylopez99 (raylopez99@yahoo.com) writes:
> Very bizarre.  If anybody has any advice please offer it.  I think SQL
> is a dinosaur language and should be scrapped, and RDBMS are obsolete
> in view of cheap memory, faster processors, and managed pointers, but
> that's another matter.

In that case, should I even bother to answer?

Memory may be cheap, but 1TB of memory is still quite expensive. And
when power is turned off, not much remains of what once was there.

And if you think SQL should be scrapped, permit me to point out that
most SQL queries rewritten in C++ or C# would require a lot more code.
And, what is worse, the code would not be able to adapt to changes in
data distribution or the addition of new indexes. The point of SQL is
that you say what result you want - the optimizer finds the best
way to compute that result.

As for your problem, you appear to be working with some graphical tools
that I have little experience with. (And not much faith in.) But
there is an apparent misconception:

> But somehow, when using C# rather than C++, I get this error when
> attempting to create a relationship between tables when using two
> identical columns called "AuthorID":  "the columns in table Authors do
> not match an existing primary key or UNIQUE constraint" Why?  The
> columns are the same. I even tried (and this made no difference)
> copying and pasting columns from one table to the other, but still I
> get this error.
>... 
> One thing (that shouldn't matter, since it didn't matter when I used C+
> + rather than C#):  the column in table "Author" is a primary key,
> while it is a "foreign key" in table "Content".  But that is
> irrelevant when creating a relationship, no?  

No, that is the essence of a relationship. Or rather that is the 
relationship.

In a relational database, all tables should have a primary key which
uniquely identifies the data. Ideally, the PK should be drawn from
the data itself. For instance, in a table of countries, the PK should
be the country code according to ISO 3166. Often, though, it is more
convenient to introduce a surrogate key, that is, a system-assigned
number for each row. Care should still be taken to avoid duplicates;
that is, Mark Twain should only appear once in a table. An important
thing to point out about primary keys is that a PK can consist of more
than one column.

Now, we have a table called Authors. Somewhere else we also have a table
called Books, in which the PK would typically be the ISBN. Of course,
when we have a book, we want to know the author. So is there an Author
column in the Books table? Nah, a book can have several authors,
so that is not really a good idea. Instead there is a table AuthorBooks
which has two columns, AuthorID and ISBN. This table states which authors
wrote which books. The primary key of this table is (AuthorID, ISBN).
But the columns AuthorID and ISBN are also foreign keys in this table.
Not together, but on their own: AuthorID refers to Authors and ISBN
refers to Books. The foreign key is a constraint: you cannot add an
AuthorID to AuthorBooks if that AuthorID does not exist in Authors.
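Sketched in T-SQL, that design would look something like this (column
types and the extra name columns are illustrative assumptions, not
taken from anyone's actual schema):

```sql
CREATE TABLE Authors (
    AuthorID   int          NOT NULL PRIMARY KEY,
    AuthorName varchar(100) NOT NULL
);

CREATE TABLE Books (
    ISBN  char(13)     NOT NULL PRIMARY KEY,
    Title varchar(200) NOT NULL
);

-- Junction table: composite PK, and each column is a FK on its own.
CREATE TABLE AuthorBooks (
    AuthorID int      NOT NULL REFERENCES Authors (AuthorID),
    ISBN     char(13) NOT NULL REFERENCES Books (ISBN),
    PRIMARY KEY (AuthorID, ISBN)
);
```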

The best way to learn SQL and relational databases is not through
graphical tools that do things behind the scenes, and far from always
get them right. Instead, learn the proper commands and get
a book which is focused on teaching relational databases.


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
esquel (7068)
7/15/2007 9:32:07 AM
On 15 Jul, 01:05, raylopez99 <raylope...@yahoo.com> wrote:
> <snip>

Keys are certainly not irrelevant to your question. They are
essential. Unfortunately it's hard to be sure what happened based on a
description of what you did in the GUI. I would advise you to use
either a dedicated data modelling tool or SQL itself to make these
sorts of changes. I would not recommend VS as a design tool.

The change you want to make should correspond to something like this:

CREATE TABLE Author (AuthorID INT NOT NULL PRIMARY KEY);

CREATE TABLE Content (AuthorID INT NOT NULL /* PRIMARY KEY NOT SPECIFIED! */);

ALTER TABLE Content
  ADD CONSTRAINT fk01 FOREIGN KEY (AuthorID)
    REFERENCES Author (AuthorID);

In other words, AuthorID MUST be a PRIMARY/UNIQUE key in the target
table being referenced.
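For completeness, a UNIQUE constraint on the referenced column is also
accepted as a FK target; a sketch (table and constraint names here are
made up for illustration):

```sql
-- The referenced column need not be the PK, but it must at least
-- carry a UNIQUE constraint for the FK to be legal.
CREATE TABLE Author2 (
    AuthorID int          NOT NULL UNIQUE,
    Name     varchar(100) NOT NULL
);

CREATE TABLE Content2 (AuthorID int NOT NULL);

ALTER TABLE Content2
  ADD CONSTRAINT fk02 FOREIGN KEY (AuthorID)
    REFERENCES Author2 (AuthorID);
```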


> I think SQL
> is a dinosaur language and should be scrapped

I agree that SQL is well overdue for replacement. Unfortunately it
can't happen overnight.


> RDBMS are obsolete
> in view of cheap memory, faster processors, and managed pointers, but
> that's another matter.

That's not a coherent line of reasoning at all. An RDBMS implements a
logical model of data that is independent of the hardware used. If
processors and memory are faster and cheaper then that makes RDBMS
faster and cheaper too. What do you propose as an alternative?

--
David Portas, SQL Server MVP

Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.

SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--

7/15/2007 9:40:30 AM
On Jul 15, 2:40 am, David Portas
<REMOVE_BEFORE_REPLYING_dpor...@acm.org> wrote:
> On 15 Jul, 01:05, raylopez99 <raylope...@yahoo.com> wrote:
>
>
> Keys are certainly not irrelevant to your question. They are
> essential. Unfortunately it's hard to be sure what happened based on a
> description of what you did in the GUI. I would advise you to use
> either a dedicated data modelling tool or SQL itself to make these
> sort of changes. I would not recommend VS as a design tool.

Sounds like sound advice.  I just upgraded today from the free SQL
Server 2005 Express to the "Developer" version, which has all the
features of the Enterprise version (for a mere $50), except actual
commercial use, which is fine with me since I'm a hobbyist.


>
> The change you want to make should correspond to something like this:
>
> CREATE TABLE Author (AuthorID INT NOT NULL PRIMARY KEY);
>
> CREATE TABLE Content (AuthorID INT NOT NULL /* PRIMARY KEY NOT
> SPECIFIED! */);
>
> ALTER TABLE Content
>  ADD CONSTRAINT fk01 FOREIGN KEY (AuthorID)
>   REFERENCES Author (AuthorID);
>
> In other words, AuthorID MUST be a PRIMARY/UNIQUE key in the target
> table being referenced.

You may be onto something.  AuthorID is indeed a primary/unique key
in the target table, but since this 'relationship' is being set up in
the early stages of table creation (albeit all tables are saved),
perhaps "behind the scenes" the VS 2005 IDE is not yet fully cognizant
of this fact.  I am firmly convinced it's a bug, since VS2005 does not
create this same error message when working in C++ as the language
(even with the same DB).

Also if you know of a way (format) to "export" a DB for debugging
online, as in this group, please let me know.

>
> > I think SQL
> > is a dinosaur language and should be scrapped
>
> I agree that SQL is well overdue for replacement. Unfortunately it
> can't happen overnight.
>
> > RDBMS are obsolete
> > in view of cheap memory, faster processors, and managed pointers, but
> > that's another matter.
>
> That's not a coherent line of reasoning at all. An RDBMS implements a
> logical model of data that is independent of the hardware used. If
> processors and memory are faster and cheaper then that makes RDBMS
> faster and cheaper too. What do you propose as an alternative?
>

Pointer indexing, using CAM and a hash table, on one big, fat, flat
file.  The old way of doing things, before IBM researcher E.F. Codd
proposed the relational model in 1970.

Thanks for your time, David Portas.  For this particular problem, I
figure that within C#/VS2005, using the SQL Server 2005 Express
version to create your DB, the "relationships" feature is broken.
There's a workaround (use C++) and/or I'll see what happens when I
upgrade to the Developer version of SQL Server 2005.


RL

raylopez99 (937)
7/15/2007 11:05:04 AM
On Jul 15, 2:32 am, Erland Sommarskog <esq...@sommarskog.se> wrote:
> raylopez99(raylope...@yahoo.com) writes:
> > Very bizarre.  If anybody has any advice please offer it.  I think SQL
> > is a dinosaur language and should be scrapped, and RDBMS are obsolete
> > in view of cheap memory, faster processors, and managed pointers, but
> > that's another matter.
>
> In that case, should I even bother to answer?

To prove me wrong Erland! LOL.

> Memory may be cheap, but 1TB of memory is still quite expensive. And
> when power is turned off, not much remains of what once was there.
>
> And if you think SQL should be scrapped, permit me to point out that
> most SQL queries written in C++ or C# would require a lot more code.
> And, what worse is, the code would not be able to adapt to changes in
> data distribution or addition of new indexes. The point with SQL is
> that you say what result you want - the optimizer finds out the best
> way to compute that result.

No, just dump your data into a flat file, use a hash table to index
every word in said file, and use hardware (CAM memory) to do a fast
search.  Simple really.  Probably what Google does right now. You
think they are using SQL "queries" to do fast searches?  No way Jose.

>
> As for your problem, you appear to be working with some graphical tools
> that I have little experience of. (And I don't have much faith in.)

Correct.  You are "right as rain" on this point.  I do think the C#
feature of VS2005 vis-a-vis the SQL DB aspect of it is broken--maybe
because I'm using the SQL Server 2005 Express (free) edition (I just
upgraded for a mere $50 to the Developer edition--I'll repost here
if it solves this small problem).


> But
> there is an apparent misconception:
>
> > But somehow, when using C# rather than C++, I get this error when
> > attempting to create a relationship between tables when using two
> > identical columns called "AuthorID":  "the columns in table Authors do
> > not match an existing primary key or UNIQUE constraint" Why?  The
> > columns are the same. I even tried (and this made no difference)
> > copying and pasting columns from one table to the other, but still I
> > get this error.
> >...
> > One thing (that shouldn't matter, since it didn't matter when I used C+
> > + rather than C#):  the column in table "Author" is a primary key,
> > while it is a "foreign key" in table "Content".  But that is
> > irrelevant when creating a relationship, no?  
>
> No, that is the essence of a relationship. Or rather that is the
> relationship.

But it should (I would think) be irrelevant to the particular bug
here, since the columns are identical.  I even tried making both
columns "primary keys" but this did not solve the bug, which proves my
point.  Sorry if I was not clear, but talking about computer problems
in any language is awkward. [UPDATE:  see my "revelation" below]

>
> In a relational database, all tables should have a primary key which
> uniquely identifies the data. Ideally, the PK should be drawn from
> the data itself. For instance, in a table of countries, the PK should
> be the country code according to ISO 3166. Often, though, it is more
> convenient to introduce a surrogate key, that is a system-assigned
> number to each row. Care should still be taken to avoid duplicates,
> that is, Mark Twain should only appear once in a table. An important
> thing to point out about primary keys is that a PK can consist of more
> than one column.

Yes, this is theory.  I agree.

>
> Now, we have table called Authors. Somewhere else we also have a table
> called Books, in which the PK would typically be the ISBN. Of course,
> when we have a book, we want to know the author. So in the Books
> table is there an Authors column? Nah, a book can have several authors
> so that is not really a good idea. Instead there is a table AuthorBooks
> which has two columns, AuthorID and ISBN. This table states which authors
> wrote which books. The primary key of this table is (AuthorID, ISBN).
> But the columns AuthorID and ISBN are also foreign keys in this table.
> Not together, but on their own. AuthorID refers to Authors and ISBN
> refers to Books. The foreign key is a constraint: you cannot add an
> AuthorID in AuthorBooks if this AuthorID does not exist in Authors.

Yes, but it seems you are advocating a THIRD table ("instead there is a
table AuthorBooks").  Why, just to link the two tables Authors and
Books?  Why not simply add a foreign key in Books called "AuthorID"
that is the primary key in Authors? (This is exactly what my Frasier
book proposes.)  But I suppose you can have a third "intermediate"
table to tie two tables together--no reason why you cannot--but it
just doesn't sound "simple".  No big deal though.

>
> The best way to learn SQL and relational databases is not through
> graphical tools that do things behind the scenes, and far from always
> have gotten things right. Instead learn the proper commands and get
> a book which is focused on learning relational databases.

OK.  Tell me the book to buy then!  Here is what I've already bought:
Andrew Brust et al, "Programming SQL Server 2005" (too specific on
T-SQL enhancements, on ADO.NET, on XML data, etc.); the Frasier book
(excellent), "Pro Visual C++" (has a chapter on SQL programming via
VS--but a mere dozen pages--however his examples whetted my appetite
for SQL); Alan Beaulieu, "Learning SQL" (O'Reilly Press--too focused
on MySQL rather than the T-SQL which Microsoft uses, so I can't
'compile' his examples, but I do read his book and try to replicate
his examples in VS2005); Morrison, "Creating SQL Server 2005
Applications with Visual Studio" (the book is just like the title
says--the authors use Visual Basic as the glue language, which I don't
like, but you can follow along; there's an emphasis on using the VS
IDE to create queries, etc.).  I also just ordered these books:
"Beginning C# 2005 Databases (Programmer to Programmer)" [probably the
same as the Morrison et al book on VB, but using C#]; "Sams Teach
Yourself SQL in 10 Minutes, Third Edition" [might be the most useful
book?]

[My REVELATION] BTW, as I typed this, I just realised that creating a
"relationship", as you said above, is simply creating primary and
foreign keys and having a foreign key in one table that is a primary
key in another.  Since I've done this already using the "Properties"
tab of tables "Author" and "Content", I don't really need the GUI
"Wizard" that seems to be broken in C#/VS2005, do I?  So the "bug" I
describe in this thread is a minor bug, a trivial bug, for those
people too lazy to designate a "primary" and a "foreign" key manually.
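For reference, designating the keys manually in plain T-SQL
(presumably more or less what the designer generates behind the
scenes; the constraint names here are just illustrative) would be
roughly:

```sql
-- Sketch: assumes Author and Content already exist without keys.
ALTER TABLE Author
  ADD CONSTRAINT PK_Author PRIMARY KEY (AuthorID);

ALTER TABLE Content
  ADD CONSTRAINT FK_Content_Author FOREIGN KEY (AuthorID)
    REFERENCES Author (AuthorID);
```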

Thanks Erland--your name sounds familiar... ah yes, you helped me with
a connection problem I had earlier this year (when I was using C++ as
the language of choice):  http://tinyurl.com/384rjt

Cheers,

Ray

raylopez99 (937)
7/15/2007 11:32:13 AM
On Jul 15, 7:32 am, raylopez99 <raylope...@yahoo.com> wrote:
> <snip>

> No, just dump your data into a flat file, use a hash table to index
> every word in said file, and use hardware (CAM memory) to do a fast
> search.  Simple really.  Probably what Google does right now. You
> think they are using SQL "queries" to do fast searches?  No way Jose.

My db is 300GB+; how much memory do you need to index every word? I
assume you won't have your app running on 100+ (more like 100000)
servers like Google does. What happens when someone does an update/
delete/insert? How would you handle that, waiting for the spider to
come along? Yes, SQL is not perfect, but right now there is nothing
better  ;-(
Just waiting on those solid state hard drives to come in bigger
sizes  ;-)


Denis The SQL Menace
http://sqlservercode.blogspot.com
http://sqlblog.com/blogs/denis_gobo/default.aspx

denis.gobo (83)
7/15/2007 11:55:16 AM
On Jul 15, 7:32 am, raylopez99 <raylope...@yahoo.com> wrote:
> On Jul 15, 2:32 am, Erland Sommarskog <esq...@sommarskog.se> wrote:
>
> > raylopez99(raylope...@yahoo.com) writes:
> > > Very bizarre.  If anybody has any advice please offer it.  I think SQL
> > > is a dinosaur language and should be scraped, and RDBMS are obsolete
> > > in view of cheap memory, faster processors, and managed pointers, but
> > > that's another matter.
>
> > In that case, should I even bother to answer?
>
> To prove me wrong Erland! LOL.
>
> > Memory may be cheap, but 1TB of memory is still quite expensive. And
> > when power is turned off, not much remains of what once was there.
>
> > And if you think SQL should be scrapped, permit me to point out that
> > most SQL queries written in C++ or C# would require a lot more code.
> > And, what worse is, the code would not be able to adapt to changes in
> > data distribution or addition of new indexes. The point with SQL is
> > that you say what result you want - the optimizer finds out the best
> > way to compute that result.
>
> No, just dump your data into a flat file, use a hash table to index
> every word in said file, and use hardware (CAM memory) to do a fast
> search.  Simple really.  Probably what Google does right now. You
> think they are using SQL "queries" to do fast searches?  No way Jose.
>
>
>
> > As for your problem, you appear to be working with some graphical tools
> > that I have little experience of. (And I don't have much faith in.)
>
> Correct.  You are "right as rain" on this point.  I do think the C#
> feature of VS2005 vis-a-vis the SQL dB aspect of it is broken--maybe
> because I'm using the SQL Server 2005 Express (free) edition (just
> upgraded for a mere $50 to the Developer's edition--I'll repost here
> if it solves this small problem).
>
>
>
>
>
> > But
> > there is an apparent misconception:
>
> > > But somehow, when using C# rather than C++, I get this error when
> > > attempting to create a relationship between tables when using two
> > > identical columns called "AuthorID":  "the columns in table Authors do
> > > not match an existing primary key or UNIQUE constraint" Why?  The
> > > columns are the same. I even tried (and this made no difference)
> > > copying and pasting columns from one table to the other, but still I
> > > get this error.
> > >...
> > > One thing (that shouldn't matter, since it didn't matter when I used C+
> > > + rather than C#):  the column in table "Author" is a primary key,
> > > while it is a "foreign key" in table "Content".  But that is
> > > irrelevant when creating a relationship, no?  
>
> > No, that is the essence of a relationship. Or rather that is the
> > relationship.
>
> But it should (I would think) be irrelevant to the particular bug
> here, since the columns are identical.  I even tried making both
> columns "primary keys" but this did not solve the bug, which proves my
> point.  Sorry if I was not clear but talking about computer problems
> in any language is ackward. [UPDATE:  see my "revelation" below]
>
>
>
> > In a relational database, all tables should have a primary key which
> > uniquely identifies the data. Ideally, the PK should be drawn from
> > the data itself. For instance, in a table of countries, the PK should
> > be the country code according to ISO 8601. Often, though, it is more
> > convenient to introduce a surrogate key, that is a system-assigned
> > number to each row. Care should still be take to avoid duplicates,
> > that is, Mark Twain should only appear once in a table. An important
> > thing to point out about primary keys is that a PK can consist of more
> > than one column.
>
> Yes, this is theory.  I agree.
>
>
>
> > Now, we have table called Authors. Somewhere else we also have a table
> > called Books, in which the PK would typically be the ISBN. Of course,
> > when we have a book, we want to know the author. So in the Books
> > table is there an Authors column? Nah, a book can have several authors
> > so that is not really a good idea. Instead there is a table AuthorBooks
> > which has two columns, AuthorID and ISBN. This table states which
> > authors wrote which books. The primary key of this table is (AuthorID,
> > ISBN). But the columns AuthorID and ISBN are also foreign keys in this
> > table. Not together, but on their own. AuthorID refers to Authors and
> > ISBN refers to Books. The foreign key is a constraint: you cannot add
> > an AuthorID to AuthorBooks if this AuthorID does not exist in Authors.
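The AuthorBooks design described above, sketched in T-SQL. This assumes an Authors table whose primary key is AuthorID; the column types are assumptions for illustration:

```sql
CREATE TABLE Books (
    ISBN  char(13) NOT NULL PRIMARY KEY,
    Title nvarchar(200) NOT NULL
);

-- Junction table resolving the many-to-many relationship.
-- The composite primary key is (AuthorID, ISBN); each column is
-- also, on its own, a foreign key to its parent table.
CREATE TABLE AuthorBooks (
    AuthorID int      NOT NULL REFERENCES Authors (AuthorID),
    ISBN     char(13) NOT NULL REFERENCES Books (ISBN),
    PRIMARY KEY (AuthorID, ISBN)
);
```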
>
> Yes, but it seems you are advocating a THIRD table "instead there is a
> table AuthorBooks".  Why, just to link the two tables Authors and
> Books?  Why not simply add a foreign key in Books called "AuthorID",
> that is the primary key in Authors? (This is exactly what my Frasier
> book proposes).  But I suppose you can have a third "intermediate"
> table, to tie two tables together--no reason why you cannot--but it
> just doesn't sound "simple".  No big deal though.
>
>
>
> > The best way to learn SQL and relational databases is not through
> > graphical tools that do things behind the scenes and have far from
> > always gotten things right. Instead, learn the proper commands and get
> > a book which is focused on learning relational databases.
>
> OK.  Tell me the book to buy then!  Here is what I've already bought:
> Andrew Brust et al "Programming SQL Server 2005" (too specific on T-
> SQL enhancements, on ADO.NET, on XML data, etc); Frasier book
> (excellent) Pro Visual C++ (has a chapter on SQL programming via VS--
> but a mere dozen pages--however his examples whetted my appetite for
> SQL); Alan Beaulieu "Learning SQL" (O'Reilly Press--too focused on
> MySQL, rather than T-SQL which Microsoft uses, so I can't 'compile'
> his examples, but I do read his book and try and replicate his
> examples in VS2005); Morrison "Creating SQL Server 2005 Applications
> with Visual Studio" (book is just like the title says--the authors use
> Visual Basic as the glue language, which I don't like, but you can
> follow along. There's an emphasis on using the VS IDE to create
> queries, etc).  I also just ordered these books: Beginning C# 2005
> Databases (Programmer to Programmer)[probably the same as the Morrison
> et al book on VB, but using C#]; Sams Teach Yourself SQL in 10
> Minutes, Third Edition [might be the most useful book?]
>
> [My REVELATION] BTW, as I typed this, I just realised that creating a
> "relationship", as you said above, is simply creating primary and
> foreign keys and having a foreign key in one table that is a primary
> key in another.  Since I've done this already using the "Properties"
> tab of tables "Author" and "Content", then I don't really need the GUI
> "Wizard" that seems to be broke in C#/VS2005, do I?  So the "bug" I
> describe in this thread is a minor bug, a trivial bug, for those
> people too lazy to designate a "primary" and "foreign" key, manually.
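For reference, the "manual" route described here amounts to two T-SQL statements. The "do not match an existing primary key or UNIQUE constraint" error usually means the referenced column is not yet declared PRIMARY KEY or UNIQUE, so the first statement is the actual fix (table and column names per the thread; the constraint names are illustrative):

```sql
-- The referenced column must be a PRIMARY KEY or UNIQUE first:
ALTER TABLE Authors
    ADD CONSTRAINT PK_Authors PRIMARY KEY (AuthorID);

-- Then the "relationship" is just a foreign key on the other table:
ALTER TABLE Content
    ADD CONSTRAINT FK_Content_Authors
        FOREIGN KEY (AuthorID) REFERENCES Authors (AuthorID);
```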
>
> Thanks Erland--your name sounds familiar... ah yes, you helped me with
> a connection problem I had earlier this year (when I was using C++ as
> the language of choice):  http://tinyurl.com/384rjt
>
> Cheers,
>
> Ray

Also, how will you do one of these queries:
return all customers who live in LA and have ordered product x between
June 2006 and July 2007?
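For comparison, a sketch of that query in SQL; the schema (Customers, Orders, OrderItems) is invented for illustration, and @ProductX is a placeholder parameter:

```sql
SELECT DISTINCT c.CustomerID, c.CustomerName
FROM   Customers  c
JOIN   Orders     o  ON o.CustomerID = c.CustomerID
JOIN   OrderItems oi ON oi.OrderID   = o.OrderID
WHERE  c.City = 'LA'
  AND  oi.ProductID = @ProductX
  -- half-open range covers June 2006 through July 2007 inclusive:
  AND  o.OrderDate >= '20060601'
  AND  o.OrderDate <  '20070801';
```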

Denis The SQL Menace
http://sqlservercode.blogspot.com
http://sqlblog.com/blogs/denis_gobo/default.aspx

denis.gobo (83)
7/15/2007 12:15:10 PM
raylopez99 wrote: [snip]
> > > I think SQL
> > > is a dinosaur language and should be scrapped
> >
> > I agree that SQL is well overdue for replacement. Unfortunately it
> > can't happen overnight.
> >
> > > RDBMS are obsolete
> > > in view of cheap memory, faster processors, and managed pointers, but
> > > that's another matter.
> >
> > That's not a coherent line of reasoning at all. An RDBMS implements a
> > logical model of data that is independent of the hardware used. If
> > processors and memory are faster and cheaper then that makes RDBMS
> > faster and cheaper too. What do you propose as an alternative?
> >
> 
> Pointer indexing, using CAM and a hash table, one big, fat, flat
> file.  The old way of doing things before IBM researcher E.F. Codd
> proposed the RDBMS in 1970.
[snip]

It is a known fact that for some applications the (current?) RDBMS is
not the optimal technology. In areas such as text search and scientific
applications (which require confidence percentages) other techniques are
significantly faster, even a few orders of magnitude faster. So
development in these areas is to be expected, and I am looking forward
to that wave of innovation.

On the other hand, an RDBMS is very good in the field of transactions
and as a general-purpose technology. For many fields of use the
RDBMS is and will be the most efficient solution. It is nowhere near
obsolete and is unlikely to be obsolete for at least another decade.

Gert-Jan
sorry5712 (225)
7/15/2007 12:27:57 PM
On Jul 15, 5:15 am, SQL Menace <denis.g...@gmail.com> wrote:

> Also how will you do one of these queries
> return all customers who live in LA and have ordered product x between
> June 2006 and July 2007
>
> Denis The SQL Menace

You wrote your dB is 300 GB--but how much of this is raw data?
Perhaps 10%?  The other 90% is junk to link the data (I'm guessing).
If so, you can buy 30 GB of RAM and when x86/Windows supports 64 bit
better (though I think they already do--Itanium?) you can access this
30 GB with no problem.  30 GB RAM costs about $1000.  Not expensive.

As for the query "return all customers who live in LA and have
ordered product x between June 2006 and July 2007":

This is simple using a flat file as I propose:  "customers & (order*
or purchase or buy*) & product x & (DATE T (June 2006 < T < July
2007))".  Suitable code can be written to make these Boolean operators
work.  Some cleanup might be required to strip out false hits, but
these false hits are present in regular databases today.  Also
remember right now with RDBMS you expend a tremendous amount of work
putting data into "orthogonal" databases, via data entry forms.  You
have to pay people to enter the data correctly (even if your program
rejects bad data entry, you still have to pay people to enter the data
correctly).  You can avoid all of this with a flat file.  Just dump
the raw data into memory and let an inference or search engine index
the data and make the associations via pointers.

RL




raylopez99 (937)
7/15/2007 12:34:18 PM
On 15 Jul, 13:34, raylopez99 <raylope...@yahoo.com> wrote:
> Also
> remember right now with RDBMS you expend a tremendous amount of work
> putting data into "orthogonal" databases, via data entry forms.  You
> have to pay people to enter the data correctly (even if your program
> rejects bad data entry, you still have to pay people to enter the data
> correctly).  You can avoid all of this with a flat file.  Just dump
> the raw data into memory and let an inference or search engine index
> the data and make the associations via pointers.
>

So your only alternative is to disregard data integrity? I hope you
won't ever be tempted to try that out in anything other than toy
experiments. Consider finance, medical, military or safety systems for
example. Or systems that preserve confidential personal or legal
information. Or e-commerce. Do you really think integrity features are
unnecessary or unimportant enough to discard? I assume you do not, but
it's far from clear just how "dump the raw data" makes it easier or
cheaper or faster to guarantee data integrity.

--
David Portas, SQL Server MVP

Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.

SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--

7/15/2007 12:46:01 PM
On Jul 15, 8:34 am, raylopez99 <raylope...@yahoo.com> wrote:
> [snip]

>> You wrote your dB is 300 GB--but how much of this is raw data?
>> Perhaps 10%?  The other 90% is junk to link the data (I'm guessing).
>> If so, you can buy 30 GB of RAM and when x86/Windows supports 64 bit
>> better (though I think they already do--Itanium?) you can access this
>> 30 GB with no problem.  30 GB RAM costs about $1000.  Not expensive.

No junk my friend, the data goes back to May 1896, so yes it is 300 GB;
the junk (as you call it) is maybe 10 MB (probably a lot less; these
are the lookup tables).
BTW this is just 1 DB. We have several of these on the server.

Now for a more interesting question. Ever heard of SOX? How would you
do your audit trail?
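One common way to answer the audit-trail question in SQL Server is a trigger that writes old and new values to a history table. This is only a sketch; the Prices/PriceAudit schema is invented for illustration:

```sql
CREATE TABLE PriceAudit (
    AuditID    int IDENTITY(1,1) PRIMARY KEY,
    SecurityID int NOT NULL,
    OldPrice   decimal(18,6) NULL,
    NewPrice   decimal(18,6) NULL,
    ChangedBy  sysname  NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt  datetime NOT NULL DEFAULT GETDATE()
);
GO
-- Every UPDATE on Prices leaves a row per changed security behind:
CREATE TRIGGER trg_Prices_Audit ON Prices AFTER UPDATE
AS
    INSERT INTO PriceAudit (SecurityID, OldPrice, NewPrice)
    SELECT d.SecurityID, d.Price, i.Price
    FROM   deleted  d
    JOIN   inserted i ON i.SecurityID = d.SecurityID;
```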


Denis The SQL Menace
http://sqlservercode.blogspot.com
http://sqlblog.com/blogs/denis_gobo/default.aspx

denis.gobo (83)
7/15/2007 1:06:12 PM
On Jul 15, 8:34 am, raylopez99 <raylope...@yahoo.com> wrote:
> [snip]

>> Also remember right now with RDBMS you expend a tremendous amount of work
>> putting data into "orthogonal" databases, via data entry forms

Why, I 'enter' almost all of my data with BCP or BULK INSERT

>> You can avoid all of this with a flat file.  Just dump
>> the raw data into memory

What about bad data? Where are your check constraints?

>> Just dump
>> the raw data into memory

Okay, sometimes I have to import files that have 100, 200, 300 million
rows; they contain CUSIPs, ISINs, total return, price return (so data
which is decimal). How are you going to import this? BTW they can be in
a variety of formats, of course.
How will you dump these into memory? You have 300 GB of memory on your
server? I don't.

>> You wrote your dB is 300 GB--but how much of this is raw data?
>> Perhaps 10%?  The other 90% is junk to link the data (I'm guessing).

It is all real data. My data goes back to May 1896, and we are adding
gigs as we speak (automated jobs).
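The BULK INSERT workflow Denis mentions, combined with the check constraints he asks about, might look like this sketch (file path, schema, and the sanity-check range are all invented for illustration):

```sql
CREATE TABLE Returns (
    Cusip       char(9)  NOT NULL,
    TradeDate   datetime NOT NULL,
    TotalReturn decimal(18,6) NOT NULL,
    PriceReturn decimal(18,6) NOT NULL,
    -- Bad rows are rejected at the door by the constraint:
    CONSTRAINT CK_Returns_Price CHECK (PriceReturn BETWEEN -1 AND 10)
);

-- CHECK_CONSTRAINTS must be named explicitly: by default,
-- BULK INSERT skips check-constraint validation for speed.
BULK INSERT Returns
FROM 'D:\feeds\returns_20070715.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', CHECK_CONSTRAINTS);
```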

Denis The SQL Menace
http://sqlservercode.blogspot.com
http://sqlblog.com/blogs/denis_gobo/default.aspx

denis.gobo (83)
7/15/2007 1:07:46 PM
On Jul 15, 8:34 am, raylopez99 <raylope...@yahoo.com> wrote:
> [snip]

One more thing (I sound like Steve Jobs, don't I?)
Let's say your site is Digg.com. Digg has over a million users, and they
use about 60 webservers.
Right now the webservers talk to the DB and they all see the same data.
How would you accomplish that with your memory solution?

You also asked for books; here are 3 I recommend:

Inside Microsoft SQL Server 2005: T-SQL Querying
http://www.amazon.com/gp/product/0735623139/102-1687136-8876917?ie=UTF8&tag=sql08-20&linkCode=xm2&camp=1789&creativeASIN=0735623139

Pro SQL Server 2005 Database Design and Optimization
http://www.amazon.com/gp/product/1590595297/102-1687136-8876917?ie=UTF8&tag=sql08-20&linkCode=xm2&camp=1789&creativeASIN=1590595297

The Guru's Guide to Transact-SQL (SQL server 2000)
http://www.amazon.com/gp/product/0201615762/102-1687136-8876917?ie=UTF8&tag=sql08-20&linkCode=xm2&camp=1789&creativeASIN=0201615762



Denis The SQL Menace
http://sqlservercode.blogspot.com
http://sqlblog.com/blogs/denis_gobo/default.aspx

denis.gobo (83)
7/15/2007 1:27:54 PM
On Jul 15, 6:07 am, SQL Menace <denis.g...@gmail.com> wrote:

[SQL Menace mentions his raw data is close to 300 GB, not 30 GB, and
300 GB memory is expensive]

OK, but even if your raw data is 300 GB rather than 30 GB, it will
only cost about $10k USD to buy this memory--and that's cheaper than
paying Oracle license fees every year to lease their software.

> >>Also
> > remember right now with RDBMS you expend a tremendous amount of work
> > putting data into "orthogonal" databases, via data entry forms
>
> Why, I 'enter' almost all of my data with BCP or BULK INSERT

You're perhaps one of the lucky few.  Most people have to enter the
data manually somewhere.

>
> >>You can avoid all of this with a flat file.  Just dump
> > the raw data into memory
>
> What about bad data? where are your check constraints?

They'll be filtered by the inference engine, magically.

The future is now Dennis.   There's no use fighting the machines; they
will eventually prevail and put you out of your job.

RL


raylopez99 (937)
7/15/2007 1:28:23 PM
On Jul 15, 9:28 am, raylopez99 <raylope...@yahoo.com> wrote:
> [snip]

>> The future is now Dennis.  There's no use fighting the machines; they
>> will eventually prevail and put you out of your job.

I will be dead by then  ;-)

And of course if something better comes along then I will use that


Denis The SQL Menace
http://sqlservercode.blogspot.com
http://sqlblog.com/blogs/denis_gobo/default.aspx

denis.gobo (83)
7/15/2007 1:32:58 PM
On Jul 15, 6:32 am, SQL Menace <denis.g...@gmail.com> wrote:
> On Jul 15, 9:28 am, raylopez99 <raylope...@yahoo.com> wrote:

> >> The future is now Dennis.  There's no use fighting the machines; they
> >> will eventually prevail and put you out of your job.
>
> I will be dead by then  ;-)
>
> And of course if something better comes along then I will use that
>
> Denis The SQL Menace

Yes, you make a good implicit point: legacy applications are why SQL
will live a long time.  Like Cobol.  Also, the hardware is not yet
cheap enough for what I propose, using pointers and CAM.  Hardware is
the same reason parallel multi-thread programming is not yet popular--
but with Intel pushing multiple cores, this should change in the
future.

I myself am learning SQL just for fun--call me crazy.

RL


raylopez99 (937)
7/15/2007 2:23:06 PM
On Jul 15, 10:23 am, raylopez99 <raylope...@yahoo.com> wrote:
> [snip]


>>parallel multi-thread programming is not yet popular

It is on SQL Server  and has been for a long time  ;-)


Denis The SQL Menace
http://sqlservercode.blogspot.com
http://sqlblog.com/blogs/denis_gobo/default.aspx

denis.gobo (83)
7/15/2007 2:25:16 PM
SQL Menace (denis.gobo@gmail.com) writes:
> You also asked for books here are 3 I recommend
> 
> Inside Microsoft SQL Server 2005: T-SQL Querying

In all fairness, this book is not intended for beginners, and it's
quite clear from Ray's posts that he needs an introductory book on
relational databases. While this book is a good read, it's intended
for people who already have experience of SQL, with SQL Server or
some other product.

> Pro SQL Server 2005 Database Design and Optimization

This is a better choice, in that Louis Davidson covers database
design more than most other SQL books do.

Myself, I cannot recommend any good beginner's books, since it was some
time ago that I had reason to read one.


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
esquel (7068)
7/15/2007 3:46:49 PM
raylopez99 (raylopez99@yahoo.com) writes:
> No, just dump your data into a flat file, use a hash table to index
> every word in said file, and use hardware (CAM memory) to do a fast
> search.  Simple really.  Probably what Google does right now. You
> think they are using SQL "queries" to do fast searches?  No way Jose.

It's not implausible that Google does something like that. After all,
they have a simple data model, but a lot of data. Use the right tool
for the right thing.

But not everything is suitable for flat files. The system I work with
is system for stock brokers and asset managers. We keep track of what
their customers buy and sell, their positions, their cash holdings.
We keep a lot of data about the various financial instruments and what
happens to them. Say that you have 6000 HP stocks in your depot. What is
the acquisition value of that position? To be able to calculate that,
we need to know that this position is the result of buying HP,
Compaq and Digital along the way. We need to report things to the tax
authorities, and levy source tax on dividends etc. And then there are
special accounts for pension savings with their own set of rules. All
in all, there are over 1000 tables, and the most efficient
way to extract a desired set of data is far from always obvious.

And an RDBMS is not only about retrieving data, but also adding data.
How do you ensure in your flat file that no transaction is
entered for an instrument that does not exist in the instrument
file? How do you ensure that if you add a withdrawal transaction
to the transaction file, the cash-holdings file is also updated?
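In an RDBMS, the two updates Erland mentions are kept consistent with a transaction; a sketch with invented table names and placeholder variables:

```sql
BEGIN TRANSACTION;

INSERT INTO Transactions (AccountID, InstrumentID, Amount, TranType)
VALUES (@AccountID, @InstrumentID, -@Amount, 'WITHDRAWAL');

UPDATE CashHoldings
SET    Balance = Balance - @Amount
WHERE  AccountID = @AccountID;

-- Either both changes become visible, or (on ROLLBACK/error) neither does.
COMMIT TRANSACTION;
```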

> You wrote your dB is 300 GB--but how much of this is raw data?
> Perhaps 10%?  The other 90% is junk to link the data (I'm guessing).
> If so, you can buy 30 GB of RAM and when x86/Windows supports 64 bit
> better (though I think they already do--Itanium?) you can access this
> 30 GB with no problem.  30 GB RAM costs about $1000.  Not expensive.

The size of the database at our biggest customer is around 350 MB. Is
all that raw data? No. Is there some "junk" that no one ever looks at?
Probably. But we cannot really throw it away. All transactional data
must be kept for ten years, to comply with Swedish legislation. But
the really interesting part is that one reason the database is inflated,
is because we have huge historical tables with the positions and cash
holdings for every day going back a couple of years. They are there to
provide faster access to the data.

And there is the problem with your 30 GB of memory. To be useful, 
you need more than 30 GB of memory. Your hash tables will easily take
up a couple of GB more. Keep in mind that you will need more than one
hash table for each file, because searches may be by different attributes.
And hash tables alone won't make it. A range query, like "give me the
sales for the first three months this year in Finland" is better served
with a sorted index. There you have more of the "junk" in a relational
database.
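Erland's point about range queries can be sketched in T-SQL; the Sales table and index name are invented for illustration:

```sql
-- A B-tree index sorted on (Country, SaleDate) serves the range
-- predicate below without scanning the whole table; a hash index
-- on exact values could not.
CREATE INDEX IX_Sales_Country_Date ON Sales (Country, SaleDate);

SELECT SUM(Amount)
FROM   Sales
WHERE  Country = 'Finland'
  AND  SaleDate >= '20070101'
  AND  SaleDate <  '20070401';
```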

And in a database that is for a data warehouse, you will find even more
junk, as in such a database you preaggregate both this and that, and 
the database easily ends up several terabytes. (And in this case, the
language of choice is rather MDX than SQL.)

A lot of that "junk" serves to make data access faster. And after all,
while memory is cheap and processors get better, searching 30 GB by
brute force, even in memory, takes a long time.

> Correct.  You are "right as rain" on this point.  I do think the C#
> feature of VS2005 vis-a-vis the SQL dB aspect of it is broken--maybe
> because I'm using the SQL Server 2005 Express (free) edition (just
> upgraded for a mere $50 to the Developer's edition--I'll repost here
> if it solves this small problem).

If you have problems with Visual Studio, you may be better off asking
in a Visual Studio group. Although I will have to admit that I find
it difficult to believe that it matters which language you are using.
 
> Yes, but it seems you are advocating a THIRD table "instead there is a
> table AuthorBooks".  Why, just to link the two tables Authors and
> Books?  Why not simply add a foreign key in Books called "AuthorID",
> that is the primary key in Authors? (This is exactly what my Frasier
> book proposes).  

Because, as I said, a book can have many authors. This is a classical
database-design problem. What is the cardinality of the relations?
One-to-one? Many-to-one? Many-to-many? Authors and books is a classical
example of many-to-many. And one reason it is commonly used is that 
since most books only have one author, it is a good example where you
may initially overlook that it is a matter of a many-to-many relationship.
And the normal way of resolving a many-to-many relationship is to
add a third table. In its simplest form, it has only the foreign keys,
but you could add attributes to the relation, for instance each author's
share of the royalties.

> Also remember right now with RDBMS you expend a tremendous amount of
> work putting data into "orthogonal" databases, via data entry forms. 
> You have to pay people to enter the data correctly (even if your program 
> rejects bad data entry, you still have to pay people to enter the data 
> correctly).  You can avoid all of this with a flat file. 

Huh? You suggest that people instead of using entry forms, would edit
the flat files directly? Au contraire my friend, with a less intelligent
data store, it's even more essential that the data entry is done right.


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
esquel (7068)
7/15/2007 4:17:12 PM
On Jul 15, 9:17 am, Erland Sommarskog <esq...@sommarskog.se> wrote:

[stuff deleted]

Thanks Erland for critiquing "Pro SQL Server 2005 Database Design and
Optimization", one of the three books SQL Menace recommended, which I
will order.

As for CAM/Pointers architecture versus RDBMS architecture, I'm sure
some smart engineer could figure out how to make the former work.
They already use CAM architecture for routing internet packets, so I'm
sure they can figure out a way to do double-entry accounting (that is,
update the cash-holdings account when debiting a withdrawal, etc).

Rest of your points are well taken. It probably does take a lot of
time to search 30 GB of RAM, even with a quad-core Intel uP.
Perhaps RDBMS will always be faster for real-time transactions, which
seems to be where they are used.

As for data entry, I was simply saying with RDBMS it seems you have to
expend effort in getting the right data into the right field (i.e.,
cannot enter a Char into a date field, etc).  So some "energy" has to
be expended to do this, more so than with a flat file, which can be more
free-flowing.  But, as the Second Law of Thermo (entropy) says, you
cannot get "high value" without expending energy (no pain, no gain).
So I suppose employing cheap Indian or Chinese or Swedish housewives
to do "data entry" is a small price to pay in order to get data in the
right format so you can use it efficiently.  The difference, if you
follow my analogy, between a rules based expert system and a more free
flowing "Knowledge Inference engine" that uses keywords.
http://en.wikipedia.org/wiki/Inference_engine.  Rules-based engines
are easier to build, and it seems to me they run faster.

RDBMS are more like rigid "rules based" expert systems while "my" (not
really mine, it's old) proposal is more like a free-flowing inference
engine.  Like another poster in this thread says, for scientific work
where you don't really know what the associations are between the data
(so you don't know what datum is orthogonal to another; and what a
good key is, etc) then "my" proposal makes more sense.

Anyway, thanks for replying and I hope to catch you again later, since
I'll have more questions that you can answer if you have the time.
Right now, I just ran my first stored procedure successfully
"manually", that is, from inside VS2005 but without using C#/C++ as
the "glue"; I entered the parameters I wanted into the query, and it
worked (I was inserting a new row into my Authors dB).  I felt like I
had successfully run "Hello World" in programming!

That also gave rise to a new question, kind of trivial: why is the new
row automatically numbered with a successive number that does not
repeat, even when you delete the new row later and run the query
again?  It must have something to do with a "seed" of some sort.  That
is, after the "3rd" row is entered and I manually delete the row, the
next time I run the query the new row is not "3" but 4.  If I delete
again, the next time the new row is 5, not "3".  And again, the
autonumber becomes 6, not "3", etc.  Not a big deal, but I'm curious
why... it must be some parameter akin to "consecutive renumbering" for
new rows.

Also, I installed the MSFT "AdventureWorks dB" sample (the successor
to the "NorthWind dB" sample in SQL 2000).  If anybody knows of a
series of lessons using this "AdventureWorks" dB, let me know (my book
by Morrison says they occasionally refer to it, but I was thinking
more of a dedicated series of lessons using this dB).  I'll also
Google this later, but if these lessons are readily available on the
Net, feel free to reply here.
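For what it's worth, the "seed" behavior Ray describes matches SQL Server's IDENTITY property: each insert consumes the next value, and a deleted value is not reissued. A sketch (table name invented):

```sql
-- AuthorID hands out 1, 2, 3, ... and never reissues a consumed value:
CREATE TABLE Authors2 (
    AuthorID int IDENTITY(1,1) PRIMARY KEY,
    Name     nvarchar(100) NOT NULL
);

INSERT INTO Authors2 (Name) VALUES ('Twain');   -- gets AuthorID 1
INSERT INTO Authors2 (Name) VALUES ('Verne');   -- gets AuthorID 2
DELETE FROM Authors2 WHERE AuthorID = 2;
INSERT INTO Authors2 (Name) VALUES ('Poe');     -- gets AuthorID 3, not 2

-- To inspect (or, with RESEED, reset) the current identity value:
DBCC CHECKIDENT ('Authors2');
```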

Cheers

RL

raylopez99 (937)
7/15/2007 4:54:11 PM
On Jul 15, 9:54 am, raylopez99 <raylope...@yahoo.com> wrote:

Just to complete this thread, I ordered these 3 books as well from
Amazon.  The Itzik Ben-Gan 2-volume set is a doorstopper, around 600
pp. each.  Note that Itzik Ben-Gan et al. have a separate book for T-SQL
programming vs. T-SQL queries.  Amazing that the authors can write so
much on what I would have imagined is a simple concept you could cover
in 100 pages.

RL


Inside Microsoft SQL Server 2005: T-SQL Programming (Pro-Developer) -
Itzik Ben-Gan; Paperback
In Stock

Inside Microsoft SQL Server 2005: T-SQL Querying (Solid Quality
Learning) - Itzik Ben-Gan; Paperback
In Stock

Beginning C# Databases: From Novice to Professional - James
Huddleston; Paperback
In Stock

raylopez99 (937)
7/15/2007 5:03:19 PM
SQL Menace wrote:

>> [snip]

> No junk my friend, the data goes back to May 1896
                                                 ^^
Color *me* impressed.  Have they got a version of MUMPS that
works with a SQL back-end now?  (God, I hope not; I've heard
horror stories about MUMPS.)

As for Ray Lopez, the reason to use an RDBMS rather than "one
big fat flat file" is the same reason to use an optimizing
compiler rather than hand-hacking assembler, i.e. in many
applications the small increase in execution time is outweighed
by a large decrease in development and maintenance time.  If
you need things like data integrity, indexes, and transactions
anyway, then why re-invent those wheels?
0
emurphy42 (1226)
7/15/2007 5:04:50 PM
raylopez99 (raylopez99@yahoo.com) writes:
> As for CAM/Pointers architecture versus RDBMS architecture, I'm sure
> some smart engineer could figure out how to make the former work.
> They already use CAM architecture for routing internet packets, so I'm
> sure they can figure out a way to do double entry accounting (that is,
> update cash-holdings account when debiting withdraw, etc).

As always, there is more than one way to skin a cat. There are more
ways to maintain data integrity than relational databases. But
it is a complex matter, and relational databases have proven to do a
decent job of it.
 
> Rest of your points are well taken. It probably does take a lot of
> time to search a 30 GB RAM memory, even with a quad core Intel uP.
> Perhaps RDBMS will always be faster for real-time transactions, which
> seems to be where they are used.

For true real-time work, you may want to use something different, maybe
in-memory. But for all that happens after (the analysis, the archiving,
the investigation of what went wrong, the trends), a relational database,
maybe with an OLAP engine on top, is the overall dominating solution today.

> As for data entry, I was simply saying with RDBMS it seems you have to
> expend effort in getting the right data into the right field (i.e.,
> cannot enter a Char into a date field, etc).  So some "energy" has to
> be expended to do this, moreso than a flat file, which can be more
> free-flowing.  

Nonsense. You can declare all columns in an RDBMS to be character if you
like. You can skip all validation. You can even skip having multiple
columns and have a single wide text column for all data. Getting the right 
data into the right field has nothing to do with relational databases; it 
has to do with business requirements. If you have an order, you don't want 
the customer and the employee who took the order to be mixed up, do you?

> So I suppose employing cheap Indian or Chinese or Swedish housewives
> to do "data entry" is a small price to pay in order to get data in the
> right format so you can use it efficiently.  

Actually, a lot of the data entry in many systems comes from loading data 
from other sources. For instance, in our system, the main bulk of the
transactions comes from the stock and option markets. Yet others are
created by the system itself from various rules (e.g. interest 
capitalisation). I would not expect that as much as 5% of the transactions
are entered manually.

> I entered the parameters I wanted into the query, and it
> worked (I was entering a new row in my Authors dB--which also gave
> rise to a new question, kind of trivial, but why the new row is
> automatically numbered with a successive number that does not repeat
> even when you delete the new row later and run the query again... must
> have something to do with a "seed" of some sort.. that is, after the
> "3rd" row is entered, and I manually delete the row, the next time I
> run the query the new row is not "3" but 4.  And if I delete again,
> the next time the new row is 5, not "3".  And again, the autonumber
> becomes 6, not "3", etc.  Not a big deal but I'm curious as to
> why...must be some parameter that's akin to "consecutive renumbering"
> for new rows).  

It appears that your table has the IDENTITY property, and it is an 
inherent characteristic of this property that it gives you gaps. If you
want consecutive numbers, you should generate your surrogate keys yourself.

The reason IDENTITY gives you gaps has nothing to do with relational
theory per se; it's all about concurrency. The design permits very
many concurrent inserts to be carried out. 
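Erland's explanation matches exactly what Ray observed (delete the 3rd row, and the next insert gets 4, not 3). SQL Server is not to hand here, but SQLite's AUTOINCREMENT keyword has the same never-reuse behaviour as IDENTITY, so a hypothetical stand-in sketch:

```python
# Hypothetical stand-in: SQLite's AUTOINCREMENT, like SQL Server's IDENTITY,
# never hands out an id that has already been used, so deletes leave gaps.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Authors (
    AuthorID INTEGER PRIMARY KEY AUTOINCREMENT,
    LastName TEXT NOT NULL)""")
for name in ("Frasier", "Morrison", "Ben-Gan"):
    conn.execute("INSERT INTO Authors (LastName) VALUES (?)", (name,))
conn.execute("DELETE FROM Authors WHERE AuthorID = 3")   # delete the 3rd row
conn.execute("INSERT INTO Authors (LastName) VALUES ('Huddleston')")
ids = [r[0] for r in conn.execute("SELECT AuthorID FROM Authors ORDER BY AuthorID")]
print(ids)  # → [1, 2, 4] -- the deleted 3 is never handed out again
```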

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
0
esquel (7068)
7/15/2007 5:52:11 PM
On Jul 15, 10:04 am, Ed Murphy <emurph...@socal.rr.com> wrote:
>
>                                                  ^^
> Color *me* impressed.  Have they got a version of MUMPS that
> works with a SQL back-end now?  (God, I hope not; I've heard
> horror stories about MUMPS.)

Mumps, according to Wikipedia: "In MUMPS syntax, some spaces are
significant; they are not merely whitespace. There are contexts in
which a pair of spaces has a different syntactic significance than a
single space"  Wow!  Amazing, first time I've ever heard of this
archaic language!


>
> As for Ray Lopez, the reason to use an RDBMS rather than "one
> big fat flat file" is the same reason to use an optimizing
> compiler rather than hand-hacking assembler, i.e. in many
> applications the small increase in execution time is outweighed
> by a large decrease in development and maintenance time.  If
> you need things like data integrity, indexes, and transactions
> anyway, then why re-invent those wheels?

Good point.  For transactions, SQL rules, like FORTRAN does in certain
scientific circles.

RL




0
raylopez99 (937)
7/15/2007 8:06:22 PM
On Jul 15, 10:52 am, Erland Sommarskog <esq...@sommarskog.se> wrote:
>
> > I entered the parameters I wanted into the query, and it
> > worked (I was entering a new row in my Authors dB--which also gave
> > rise to a new question, kind of trivial, but why the new row is
> > automatically numbered with a successive number that does not repeat
> > even when you delete the new row later and run the query again... must
> > have something to do with a "seed" of some sort.. that is, after the
> > "3rd" row is entered, and I manually delete the row, the next time I
> > run the query the new row is not "3" but 4.  And if I delete again,
> > the next time the new row is 5, not "3".  And again, the autonumber
> > becomes 6, not "3", etc.  Not a big deal but I'm curious as to
> > why...must be some parameter that's akin to "consecutive renumbering"
> > for new rows).  
>
> It appears that your table has the IDENTITY property, and it is an
> inherit characteristic of this property that it gives you gaps. If you
> want consecutive numbers, you should generate your surrogate keys yourself.
>
> The reason IDENTITY gives you gaps, has nothing to do with relational
> theory per se, but it's all about concurrency. The design permits very
> many concurrent inserts being carried out.
>

Thanks again Erland--you are a treasure trove of information!  If I
could pay you I would; it's like having a virtual tutor in SQL!

I did indeed study the IDENTITY property (in VS2005 Visual C++/C#
"properties" are the new "global variable" of an object).  Below is
what I found.  I played around a bit and found that you cannot 'turn
off' the Identity Property easily.  Your insertion of a record (row)
fails if Identity is turned off.  Thus I am puzzled by the statement
below (from the Help file):   ***Note  If an identity column exists
for a table with frequent deletions, gaps can occur between identity
values. If you want to avoid such gaps, do not use the identity
property.***

How can you avoid such gaps by not using the Identity property?  How
will your rows be autonumbered?  Right now my AuthorID is simply an int
value that is autonumbered using the IDENTITY property.

Please be advised:  I don't care if there are "gaps"--it makes no
difference to me--this is an academic question.  No need to explain,
as I feel I've already taken up too much of your time (later, when I
actually do some serious programming in SQL, I'll probably need your
advice so I have to conserve on my goodwill here, haha).  I'm sure you
can somehow set up a complicated (or maybe for you, simple) stored
procedure of some sort to increment a number used in a column and
decrement the same number whenever rows are added or deleted,
respectively.  I can do the same in C++.  And again, I don't see the
need (unless you are a neat freak and like consecutive numbers with no
gaps for your AuthorID).  Just a curiosity at this point.  [UPDATE:
rereading this before I hit the Send key, I notice a way out:  perhaps
the trick is to use the AuthorID as a NON-Primary Key Column.  Right
now, in my Authors table, I use a Primary Key column with the Identity
Property, and you cannot use a Primary Key column that contains a NULL
value.  Inspiration for this thought:  "Also, it cannot be set for a
primary key column."]  {Update to the Update:  tried using the Identity
Property turned OFF for a new column that allows Nulls, and it did not
work to generate a consecutive number; in fact the insert Record/Row
procedure** (reproduced below) failed to work}

Thanks again,

Ray

From Visual Studio 2005's huge Help file on the IDENTITY property
(huge but often quite useless!):

How to: Modify Column Identity Properties
You can change the identity properties of a column if you want to
redefine the sequential numbers that are automatically generated and
stored in that column when new records are added to the table. You can
set the identity properties on only one column per table.

Columns that have the identity property contain system-generated
sequential values that uniquely identify each row within a table (for
example, employee identification numbers). When inserting values into
a table with an identity column, Microsoft SQL Server automatically
generates the next identifier based on the last used identity value
(the identity seed property) and the increment value (the identity
increment property) specified during the creation of the column.

Note
The identity property can be set only for a column that disallows null
values and whose data type is decimal, int, numeric, smallint, bigint,
or tinyint. Also, it cannot be set for a primary key column.


To modify the identity properties for a column
In Server Explorer, right-click the table with identity properties you
want to modify and click Open Table Definition.

The table opens in Table Designer.

[STUFF DELETED ON HOW TO DO THIS FROM THE TABLE DESIGNER GUI]

For example, suppose you want to automatically generate a 5-digit
Order ID for each row added to the orders table, beginning with 10000
and incremented by a value of 10. To do this, you would type an
Identity Seed of 10000, and type an Identity Increment of 10.

If you change any of the identity properties for a table, the existing
identity values will be preserved. Your new settings apply only to new
rows that are added to the table.

***Note
If an identity column exists for a table with frequent deletions, gaps
can occur between identity values. If you want to avoid such gaps, do
not use the identity property.***
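The seed/increment rule in that Help example is just an arithmetic sequence; a toy sketch of it (the `identity` generator here is a hypothetical helper for illustration, not anything SQL Server exposes):

```python
# Toy emulation of the IDENTITY(seed, increment) counter from the Help
# example: Identity Seed 10000, Identity Increment 10.
def identity(seed, increment):
    current = seed
    while True:
        yield current
        current += increment

order_ids = identity(10000, 10)
print([next(order_ids) for _ in range(3)])  # → [10000, 10010, 10020]
```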

--

** the Stored Procedure in question in this thread, to insert a new
row in the Authors table, which has three columns, AuthorID, LastName,
FirstName:

ALTER PROCEDURE dbo.InsertAuthor2

	(
		@LastName NVARCHAR(32) = NULL,
		@FirstName NVARCHAR(32) = NULL
	)
AS
	/* SET NOCOUNT ON  */
	INSERT INTO Authors (LastName, FirstName)
	VALUES				(@LastName, @FirstName)
	RETURN

0
raylopez99 (937)
7/15/2007 9:02:30 PM
raylopez99 (raylopez99@yahoo.com) writes:
> How can you avoid such gaps by not using the Identity property?  How
> will your rows be autonumbered?  Right now my AuthorID is simply a int
> value that is autonumbered using the IDENTITY property.
 
Without IDENTITY, no auto-number. You will have to roll your own. Which 
is not very difficult at all:

   BEGIN TRANSACTION

   SELECT @id = coalesce(MAX(id), @id) + 1 FROM tbl WITH (UPDLOCK)

   INSERT tbl (id, col1, ....)
      VALUES (@id, @val1, ....)

   COMMIT TRANSACTION

Typically you do this in a stored procedure.

The transaction and the UPDLOCK hint are needed to prevent two processes
getting the same number and one of them dying with a PK violation. This
solution works fine, as long as you don't have a high degree of concurrency.
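As a hypothetical sketch of the same roll-your-own pattern (SQLite standing in for SQL Server; SQLite has no UPDLOCK hint, so its coarse transaction locking serialises the read-then-insert instead):

```python
# Hypothetical SQLite sketch of the MAX(id)+1 pattern above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, col1 TEXT)")

def insert_row(conn, val):
    with conn:  # BEGIN TRANSACTION ... COMMIT
        (next_id,) = conn.execute(
            "SELECT COALESCE(MAX(id), 0) + 1 FROM tbl").fetchone()
        conn.execute("INSERT INTO tbl (id, col1) VALUES (?, ?)",
                     (next_id, val))
    return next_id

insert_row(conn, "a")
insert_row(conn, "b")
conn.execute("DELETE FROM tbl WHERE id = 2")
print(insert_row(conn, "c"))  # → 2: MAX+1 refills the gap, unlike IDENTITY
```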

I would also like to repeat that while system-generated ids are sometimes 
necessary, there are many tables for which there is a usable natural 
key, in which case you should use that. And in the cases where you do use
a system-generated id, you should still strive to identify a set of columns
in the table that uniquely identifies a row.
-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
0
esquel (7068)
7/15/2007 9:51:55 PM
"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns996ECA8E4FCA0Yazorman@127.0.0.1...
> .
> There are more possibilities to maintain data integrity than relational 
> databases. But
> it is a complex matter, and relational databases has proven to do a
> decent job of it.
>.

The idea of 'integrity' in sql is immature. Unfortunately most here are
only familiar with what is served up as 'integrity' in BOL. Imagine if sql
was as 'dedicated' to integrity as it is to performance. What sort of
'integrity' features would it have? Could you call it more than 'decent'?
Let sql do what it does best - access data. Using sql to define a schema
is using it at what it does not do best. Such a simple concept -:)

www.beyondsql.blogspot.com



0
steve9948 (2)
7/15/2007 10:03:49 PM
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@acm.org> wrote in message 
news:1184492430.170344.262610@q75g2000hsh.googlegroups.com...
>.
> I agree that SQL is well overdue for replacement. Unfortunately it
> can't happen overnight.

Let me clarify your thought - it can be replaced if and only if MS does it 
<cringe> :P -:)

www.beyondsql.blogspot.com





0
steve9948 (2)
7/15/2007 10:10:42 PM
On Jul 15, 3:10 pm, "Steve Dassin" <st...@nospamrac4sql.net> wrote:
> "David Portas" <REMOVE_BEFORE_REPLYING_dpor...@acm.org> wrote in message

> Let me clarify your thought - it can replaced if and only if MS does it
> <cringe> :P -:)
>
> www.beyondsql.blogspot.com

Best of luck Steve Dassin.  Took a quick look at your blog, and the D4
product, and it looks like it has a lot of thought behind it (based on
just a look and feel--I measure my cumulative exposure to SQL in
hours, not years like you guys here!).

Hope your "OO" SQL (from what I can tell) really does well--we need
something more in the 21st century than the procedural, interpreted
dinosaur SQL seems to be.  Realising of course that even with a
dinosaur language you can do serious work, as any engineer who uses
"Perl" or "Matlab" will tell you.

Maybe MSFT or ORCL will buy you out, just to keep your special sauce
from cannibalizing their own product, LOL.

Here's hoping.

RL




0
raylopez99 (937)
7/15/2007 10:49:49 PM
Hello Ray,

Thanks for at least taking a look at the blog -:)
It's not my product, I'm just a very enthusiastic advocate
of D4. I 'get it', of course I get sql too -:)
I'm trying to generate some interest in it here but it's
like pulling teeth. Pulling people out of their sql
comfort zone is no easy task. At least the excuses are
entertaining -:) Introducing advanced concepts like a
'table', a 'row', a 'list' and extending the concept of
a 'variable' seems like anathema to the sql community -:(
I guess I'll have to be more aggressive in convincing
people just what the big picture is. Such confusion, such
misunderstandings. But eventually some, especially those
that have some understanding of application development,
will see what I'm talking about, get it and take the leap.
For those that see themselves as basically sql programmers
my job is tougher. They will need solid quality re-learning.

best,
steve

"raylopez99" <raylopez99@yahoo.com> wrote in message
news:1184539789.935459.24010@d55g2000hsg.googlegroups.com...
> On Jul 15, 3:10 pm, "Steve Dassin" <st...@nospamrac4sql.net> wrote:
> > "David Portas" <REMOVE_BEFORE_REPLYING_dpor...@acm.org> wrote in message
>
> > Let me clarify your thought - it can replaced if and only if MS does it
> > <cringe> :P -:)
> >
> > www.beyondsql.blogspot.com
>
> Best of luck Steve Dassin.  Took a quick look at your blog, and D4
> product, and it looks like it has a lot of thought behind it (based on
> just a look and feel--I measure my cummulative exposure to SQL in
> hours, not years like you guys here!).
>
> Hope your "OO" SQL (from what I can tell) really does well--we need
> something more in the 21st century than the procedural, interpreted
> dinosaur SQL seems to be.  Realising of course that even with a
> dinosaur language you can do serious work, as any engineer who uses
> "Perl" or "Matlab" will tell you.
>
> Maybe MSFT or ORCL will buy you out, just to keep your special sauce
> from cannibalizing their own product, LOL.
>
> Here's hoping.
>
> RL
>
>
>
>


0
Steve
7/16/2007 12:19:22 AM
> Mumps, according to Wikipedia: "In MUMPS syntax, some spaces are
> significant; they are not merely whitespace. There are contexts in
> which a pair of spaces has a different syntactic significance than a
> single space"  Wow!  Amazing, firs time I've ever heard of this
> archaic language!

As you can probably guess, MUMPS can be a developer's worst nightmare. 
Anecdotal evidence: 
http://worsethanfailure.com/Articles/A_Case_of_the_MUMPS.aspx

-- 
Hope this helps.

Dan Guzman
SQL Server MVP

"raylopez99" <raylopez99@yahoo.com> wrote in message 
news:1184529982.384234.39370@g4g2000hsf.googlegroups.com...
> On Jul 15, 10:04 am, Ed Murphy <emurph...@socal.rr.com> wrote:
>>
>>                                                  ^^
>> Color *me* impressed.  Have they got a version of MUMPS that
>> works with a SQL back-end now?  (God, I hope not; I've heard
>> horror stories about MUMPS.)
>
> Mumps, according to Wikipedia: "In MUMPS syntax, some spaces are
> significant; they are not merely whitespace. There are contexts in
> which a pair of spaces has a different syntactic significance than a
> single space"  Wow!  Amazing, firs time I've ever heard of this
> archaic language!
>
>
>>
>> As for Ray Lopez, the reason to use an RDBMS rather than "one
>> big fat flat file" is the same reason to use an optimizing
>> compiler rather than hand-hacking assembler, i.e. in many
>> applications the small increase in execution time is outweighed
>> by a large decrease in development and maintenance time.  If
>> you need things like data integrity, indexes, and transactions
>> anyway, then why re-invent those wheels?
>
> Good point.  For transactions, SQL rules, like FORTRAN does in certain
> scientific circles.
>
> RL
>
>
>
> 

0
guzmanda (529)
7/16/2007 12:43:29 AM
Dan Guzman (guzmanda@nospam-online.sbcglobal.net) writes:
>> Mumps, according to Wikipedia: "In MUMPS syntax, some spaces are
>> significant; they are not merely whitespace. There are contexts in
>> which a pair of spaces has a different syntactic significance than a
>> single space"  Wow!  Amazing, firs time I've ever heard of this
>> archaic language!
> 
> As you can probably guess, MUMPS can be a developer's worst nightmare. 
> Anecdotal evidence: 
> http://worsethanfailure.com/Articles/A_Case_of_the_MUMPS.aspx
 
Actually, my first programming job ever was in MUMPS! It was only for
the summer though.

The language certainly was weird. In one way it was very high-level: you 
did not have to open files and such. Just put a ^ in front of a variable
name, and it was on disk. (I did not know what a "database" was then.)
On the other hand, some things were low-level; for instance, the syntax
did not permit indentation.

The weirdest thing I did was to implement a search-and-replace
utility. This was not a regular program, but it resided in a global
array (global = on disk). I found that this brought me to such a low
level that I no longer had the luxury of a program counter;
at the end of each row, I had to explicitly say which "line" to
go to next.

I'm surprised to see that it is still around.


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
0
esquel (7068)
7/16/2007 8:32:58 AM
On Jul 15, 5:19 pm, "Steve Dassin" <rac4sqlnospam@net> wrote:
> Hello Ray,
>
> Thanks for at least taking a look at the blog -:)
> It's not my product, I'm just a very enthusiastic advocate
> of D4. I 'get it', of course I get sql too -:)
> I'm trying to generate some interest in it here but it's
> like pulling teeth. Pulling people out of their sql
> comfort zone is no easy task. At least the excuses are
> entertaining -:) Introducing advanced concepts like a
> 'table', a 'row', a 'list' and extending the concept of
> a 'variable' seems like anathema to the sql community -:(
> I guess I'll have to be more aggressive in convincing
> people just what the big picture is. Such confusion, such
> misunderstandings. But eventually some, especially those
> that have some understanding of application development,
> will see what I'm talking about, get it and take the leap.
> For those that see themselves as basically sql programmers
> my job is tougher. They will need solid quality re-learning.
>
> best,
> steve
>

Well hang in there!  The worst that can happen is that, as a pioneer, you
educate the Great Unwashed Masses and convert some Early Adopters to your
product after many years of trying, and then some new foreign firm paying
slave programmer wages comes in and steals your market share.  But you
get to be known as a pioneer, like Dan Bricklin and VisiCalc.

My book sez:  "OODBMS were initially developed in the 1990s, but have
not been commercially successful yet.  One problem is that
organizations currently have large volumes of data in hierarchical and
relational formats, and it is expensive and time consuming to migrate
the data to object classes.  Another problem is that object-oriented
databases have not yet been able to yield the performance needed for
applications that must process high transaction volumes quickly. [THIS
LAST SENTENCE SOUNDS LIKE WHAT WE DISCUSSED IN THIS THREAD, WITH FLAT
FILES AND CAM MEMORY.  I BET AS H/W BECOMES FASTER THIS LAST CONCERN
WILL BE LESS IMPORTANT.]" from p. 9 Joline Morrison SQL book.

RL

0
raylopez99 (937)
7/16/2007 9:44:59 AM
On Jul 15, 2:40 am, David Portas
<REMOVE_BEFORE_REPLYING_dpor...@acm.org> wrote:
> On 15 Jul, 01:05,raylopez99<raylope...@yahoo.com> wrote:

> > Problem:
>
> > I replicated, using Frasier "Pro Visual C++/CLI" (2006), Chap 12, a
> > database that has a relationship between two tables, called Content
> > and Author, using a common column, called "AuthorID".  I used the
> > Server Explorer insider Visual Studio 2005 to do this.  It worked.
>
> > Now I tried the exact same thing from Visual Studio 2005 but this time
> > using C# not C++ as my language of choice.  The interface is slightly
> > different, but I made sure everything was done as before.  I used the
> > Server Explorer 'GUI' to do this, clicking and following the 'wizards'
> > as before.
>
> > But somehow, when using C# rather than C++, I get this error when
> > attempting to create a relationship between tables when using two
> > identical columns called "AuthorID":  "the columns in table Authors do
> > not match an existing primary key or UNIQUE constraint" Why?  The
> > columns are the same. I even tried (and this made no difference)
> > copying and pasting columns from one table to the other, but still I
> > get this error.
>
> > I tried everything, even deleting columns and rebuilding them, which
> > eventually created a new error that fatally compromised the database
> > (if you're curious, and as an aside, it created this error: "'Authors'
> > table - Unable to modify table.   Cannot insert the value NULL into
> > column 'AuthorID',  able 'DCV3_CSharp.dbo.Tmp_Authors'; column does
> > not allow nulls. INSERT fails.
> > The statement has been terminated.")

UPDATE:  I figured out a workaround.  The bug occurs before the tables
are saved.  So, you should create the tables, save them, then try the
"Relationship" (which is simply a CONSTRAINT between foreign and
primary keys).  This then works from within Visual Studio 2005.

So it's a minor bug after all.
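The constraint the wizard was complaining about shows up in any engine that enforces foreign keys: the referenced column must carry a PRIMARY KEY or UNIQUE constraint, and the referencing rows must match it. A hypothetical SQLite sketch (SQL Server not to hand), using the thread's Authors/Content tables:

```python
# Hypothetical SQLite sketch: the "Relationship" is a FOREIGN KEY constraint
# from Content.AuthorID to the PRIMARY KEY column Authors.AuthorID.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only on request
conn.execute("CREATE TABLE Authors (AuthorID INTEGER PRIMARY KEY, LastName TEXT)")
conn.execute("""CREATE TABLE Content (
    ContentID INTEGER PRIMARY KEY,
    AuthorID INTEGER NOT NULL REFERENCES Authors(AuthorID),
    Title TEXT)""")
conn.execute("INSERT INTO Authors VALUES (1, 'Frasier')")
conn.execute("INSERT INTO Content VALUES (1, 1, 'Pro Visual C++/CLI')")  # OK
try:
    conn.execute("INSERT INTO Content VALUES (2, 99, 'orphan')")  # no author 99
except sqlite3.IntegrityError as e:
    print(e)  # → FOREIGN KEY constraint failed
```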

RL


0
raylopez99 (937)
7/18/2007 8:12:08 PM