Database Tripled In Size!!

Yikes! My database, which had been consistently 1 gig for a long time, went 
from being 1 gig to 3 gigs overnight! Looking at the nightly backups, the 
database increased on average about 5-15 MB per day, and was 1.06 GB on the 
Thursday night backup. Then, with the Friday night backup, it was 2.95 GB, 
and has stayed that way since!

I did a Shrink on the database, but that didn't help the situation.

The only thing I could think it might relate to is the following. I 
previously (about a week ago) changed a couple of tables' DateModified field 
from smalldatetime to datetime. (I posted about this under a separate thread 
here.) For some reason I was getting occasional errors which I believe might 
have been related to the new data type. So I decided to change the data 
types back to smalldatetime.

I made the table changes Thursday night, right before the backup, but after 
the database optimizations. The backup Thursday night still shows the small 
database size. But the backup Friday night has the large size.

So this might not be related to the table changes at all. But I know for a 
fact that there isn't 3x the data in the database. Somehow the database is 
bloated to 3x its size, and it's not from actual data.

Any ideas about what I can do?

Thanks!

Neil 


nospam52 (1566)
12/24/2007 1:37:57 AM
comp.databases.ms-sqlserver
On Mon, 24 Dec 2007 01:37:57 GMT, "Neil" <nospam@nospam.net> wrote:

Check Database Properties > Files page. It will show you how much the
db should grow once it fills up. 1 MB by default, but you might have
changed it.

3GB is still very small, and hard disk space is cheap.

-Tom.



12/24/2007 6:39:38 AM
"Tom van Stiphout" <no.spam.tom7744@cox.net> wrote in message 
news:sukum3hs6q2hmn7hrtvje819ur81jna20c@4ax.com...
> On Mon, 24 Dec 2007 01:37:57 GMT, "Neil" <nospam@nospam.net> wrote:
>
> Check Database Properties > Files page. It will show you how much the
> db should grow once it fills up. 1 MB by default, but you might have
> changed it.
>
> 3GB is still very small, and hard disk space is cheap.
>
> -Tom.
>
>
>

In File Properties (which I've never touched), it's set to Automatically 
grow file by 10%, with unrestricted filegrowth.

And I realized that 3 GB isn't that large. Still, the fact remains that it 
couldn't have tripled in size overnight through data entry. So there has to 
be something else going on there. If there's bloating, then it would be good 
to get rid of it. But, as noted, Shrink Database doesn't bring it back down. 
So I don't know what's going on.


nospam52 (1566)
12/24/2007 7:29:01 AM


Here's one thought I had: is it possible to look at a breakdown of the 
objects in the database, and how much space each one is taking up? Perhaps 
that would help to determine what's going on here. 


nospam52 (1566)
12/24/2007 7:40:02 AM
Neil (nospam@nospam.net) writes:
> So this might not be related to the table changes at all. But I know for a 
> fact that there isn't 3x the data in the database. Somehow the database is 
> bloated to 3x its size, and it's not from actual data.
> 
> Any ideas about what I can do?
 
A very simple piece of advice is: nothing. 3GB is not a very big database, and 
I would not lose sleep over a database growing from 1GB to 3GB. Well,
maybe if I were on Express where there is a size limit, but I know you
aren't.

If you really want to find out what happened, first examine whether
it's the log file or the data file that has expanded. sp_helpdb gives you
that information.

If the log file has expanded, that may be related to your change of
smalldatetime to datetime - but changing back is going to increase the
log again. In this case you should consider shrinking the log. Normally
shrinking the log is not a good idea, because it will grow again, but
if the log has grown because of some exceptional event, it's reasonable
to shrink it.
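For reference, a sketch of how one might check which file grew and shrink just the log in SQL 7/2000 syntax (the database name and logical log file name below are placeholders):

```sql
-- Show file layout and sizes for one database ('mydb' is a placeholder)
EXEC sp_helpdb 'mydb'

-- Log size and percent-in-use for every database on the server
DBCC SQLPERF(LOGSPACE)

-- If the log is what grew, shrink only the log file;
-- the logical name 'mydb_log' and the 100 MB target are hypothetical
DBCC SHRINKFILE (mydb_log, 100)
```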

If the data file has expanded, this query shows usage per table:

   select object_name(id), reserved, used, dpages
   from   sysindexes
   where  indid in (0,1)
   order  by reserved desc
   
The numbers are in pages of 8192 bytes. "reserved" is the total that is
reserved for the table, including indexes. "used" is how many of those
pages are actually in use, and "dpages" is the number of data pages.
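Since each page is 8192 bytes (8 KB), a variant of the query above that reports megabytes directly might look like this (same sysindexes columns, just scaled):

```sql
-- Page counts converted to MB: pages * 8 KB / 1024
select object_name(id)        as table_name,
       reserved * 8 / 1024.0  as reserved_mb,
       used     * 8 / 1024.0  as used_mb,
       dpages   * 8 / 1024.0  as data_mb
from   sysindexes
where  indid in (0, 1)
order  by reserved desc
```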

If there is any table that is suspiciously big, check if any indexes have
been added recently. Also run DBCC SHOWCONTIG to see whether there is 
any fragmentation.
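On SQL 7, DBCC SHOWCONTIG takes the object id rather than the table name (the table name here is a placeholder):

```sql
-- Fragmentation report for a single table on SQL 7;
-- SQL 2000 also accepts the table name directly
DECLARE @id int
SELECT @id = object_id('MyBigTable')   -- placeholder table name
DBCC SHOWCONTIG (@id)
```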


But most of all: I wish you a really Merry Christmas, and only look
at your expanded database if you get really bored by Christmas festivities -
which I hope you don't!

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
esquel (7068)
12/24/2007 9:47:02 AM
"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A106EF2E30A4Yazorman@127.0.0.1...

Thanks, Erland. Yeah, the log file's only 768 KB, whereas the MDF file is 
3.3 GB.

Also, I ran the query you gave, and all the tables appear to be the sizes 
they should be. At least none seemed large enough to account for 2 GB.

I appreciate you saying not to worry about it. But, still, how could a 
database that has been steady at 1 GB all of a sudden go from 1 GB to 3 
GB in one fell swoop, for no apparent reason? And, if it did do that (and 
never did anything like that before), wouldn't that mean that performance 
would suffer, if there's 2 GB worth of garbage in there somehow?

Thanks, and I wish you a very Merry Christmas as well. Thanks for your 
continued help!

Neil 


nospam52 (1566)
12/24/2007 11:29:35 AM
Neil (nospam@nospam.net) writes:
> Also, I ran the query you gave, and all the tables appear to be the
> sizes they should be. At least none seemed very large, large enough to
> account for 2 GB. 

In that case sp_spaceused for the database should report a lot of free
space.
 
> I appreciate you saying not to worry about it. But, still, how could a 
> database that has been steady at 1 GB just all of sudden go from 1 GB to 3 
> GB in one fell swoop, for no apparent reason. And, if it did do that (and 
> never did anything like that before), wouldn't that mean that performance 
> would suffer, if there's 2 GB worth of garbage in there somehow?

Do you run a regular maintenance job on the database that defragments
indexes? It might be that when you changed those columns to datetime from
smalldatetime, the tables had to be rebuilt entirely on new ground. That
is, all tables were moved, and to get space to move them, the database
exploded.

It is not likely that this will cause any performance problems. Trying
to shrink the database, on the other hand, may, as shrinking leads to
fragmentation. To truly shrink it, you would have to rebuild from 
scripts and reload. Definitely not worth it.
 


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

esquel (7068)
12/24/2007 2:31:19 PM
"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A109F25D9F96Yazorman@127.0.0.1...
> Neil (nospam@nospam.net) writes:
>> Also, I ran the query you gave, and all the tables appear to be the
>> sizes they should be. At least none seemed very large, large enough to
>> account for 2 GB.
>
> In that case sp_spaceused for the database should report a lot of free
> space.
>

It shows about half a gig of unused space. Here's the printout:

database_size      unallocated space
------------------ ------------------
3355.75 MB         -2571.25 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------ 
6069248 KB         2477728 KB         3066760 KB         524760 KB


>> I appreciate you saying not to worry about it. But, still, how could a
>> database that has been steady at 1 GB just all of sudden go from 1 GB to 
>> 3
>> GB in one fell swoop, for no apparent reason. And, if it did do that (and
>> never did anything like that before), wouldn't that mean that performance
>> would suffer, if there's 2 GB worth of garbage in there somehow?
>
> Do you run a regular maintenance job on the database that defragments
> indexes?

The maintenance job that is run nightly performs the following:

Optimizations tab:

Reorganize data and index pages
     (change free space per page percentage to 10%)

Remove unused space from database files
    (shrink database when it grows beyond 50 MB)
    (amount of free space to remain after shrink: 10% of the data space)

Integrity tab:

Check database integrity
    (include indexes)
        (attempt to repair any minor problems)

Thanks!

Neil




nospam52 (1566)
12/24/2007 9:07:51 PM
Neil (nospam@nospam.net) writes:
> It shows about half a gig of unused space. Here's the printout:
> 
> database_size      unallocated space
> --------------------------------------
> 3355.75 MB         -2571.25 MB
> 
> reserved           data               index_size         unused
> ------------------ ------------------ ------------------ -----------------
> 6069248 KB         2477728 KB         3066760 KB         524760 KB

The negative number for unallocated space is spooky. Run it again,
but now like this:

   sp_spaceused NULL, true

That will make sure the values in sysindexes are updated.
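Passing true for @updateusage does roughly the same bookkeeping fix as DBCC UPDATEUSAGE, which can also be run on its own; a sketch:

```sql
-- Correct the page-count bookkeeping in sysindexes
DBCC UPDATEUSAGE (0)    -- 0 means the current database

-- Then report space with the corrected counts
EXEC sp_spaceused @updateusage = 'true'
```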

By the way, are you still on SQL 7? I seem to recall that you talked
about moving on to SQL 2005, but did that materialise?
 
> The maintenance job that is run nightly performs the following:
> 
> Optimizations tab:
> 
> Reorganize data and index pages
>      (change free space per page percentage to 10%)

That's OK.
 
> Remove unused space from database files
>     (shrink database when it grows beyond 50 MB)
>     (amount of free space to remain after shrink: 10% of the data space)

But remove this one. Shrinking the database is not a good thing to do
on a regular basis.
 

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

esquel (7068)
12/24/2007 11:21:57 PM
"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A114FA9BFBDYazorman@127.0.0.1...
> Neil (nospam@nospam.net) writes:
>> It shows about half a gig of unused space. Here's the printout:
>>
>> database_size      unallocated space
>> --------------------------------------
>> 3355.75 MB         -2571.25 MB
>>
>> reserved           data               index_size         unused
>> ------------------ ------------------ ------------------ -----------------
>> 6069248 KB         2477728 KB         3066760 KB         524760 KB
>
> The negative number for unallocated space is spooky. Run it again,
> but now like this:
>
>   sp_spaceused NULL, true
>
> That will make sure the values in sysindexes are updated.
>

OK, here it is:

database_size      unallocated space
------------------ ------------------ 
3355.75 MB         338.75 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------ 
3089408 KB         2480768 KB         166896 KB          441744 KB



> By the way, are you still on SQL 7? I seem to recall that you talked
> up moving on to SQL 2005, but did that materialise?

It's still upcoming. Hopefully in the next month or two. You think that 
might make a difference with this situation?

>
>> The maintenance job that is run nightly performs the following:
>>
>> Optimizations tab:
>>
>> Reorganize data and index pages
>>      (change free space per page percentage to 10%)
>
> That's OK.
>
>> Remove unused space from database files
>>     (shrink database when it grows beyond 50 MB)
>>     (amount of free space to remain after shrink: 10% of the data space)
>
> But remove this one. Shrinking the database is not a good thing to do
> on regular terms.

OK, removed it. Is that something I should do periodically?

Thanks!

Neil





nospam52 (1566)
12/25/2007 1:15:37 AM
OK, Erland, here's another strange one.

I took the last backup from before the database ballooned, and I downloaded 
it to my development machine, so as to be able to look at the old vs. new 
data, to see if I missed anything.

After restoring the backup, the database size was 1.5 GB, though it was 
previously 1.0 GB. Going into Shrink Database, it showed that only 1.0 GB 
was used. So I performed a Shrink Database on the old database.

The Shrink Database took a very long time. When it was done, I was told, 
"The database has been shrunk to a size of 2 GB"!!! So it grew instead of 
shrinking!

Looking at the files, the MDF is 1.26 GB, and the LDF is 0.75 GB, for a 
total of 2 GB.

So the database that had been 1 GB grew to 1.5 after backup, and then 2.0 
after shrinking.

This is the backup that was performed right after I made the table changes. 
Remember that I said that I made the changes *after* the optimizations that 
night, but before the backup. The backup file itself is 1.1 GB. It was the 
following night's backup that grew to 3 GB.

So, apparently, making the changes, in and of themselves, didn't balloon the 
database. But something after that, probably the following night's 
optimizations (which included a Shrink Database), caused it to balloon to 3 
GB.

A significant difference between my test and the actual database, though, is 
that when the old database grew to 2 GB, 0.75 GB of that was from the log 
file. With the current 3 GB database, though, the log file is less than a 
megabyte. The 3 GB is all in the data file.

Weird.

Neil 


nospam52 (1566)
12/25/2007 2:58:06 AM
Neil (nospam@nospam.net) writes:
> I took the last backup from before the database ballooned, and I
> downloaded it to my development machine, so as to be able to look at the
> old vs. new data, to see if I missed anything. 

Wait, didn't I tell you to celebrate Christmas and relax?!?!?

> After restoring the backup, the database size was 1.5 GB, though it was 
> previously 1.0 GB. Going into Shrink Database, it showed that only 1.0 GB 
> was used. So I performed a Shrink Database on the old database.
> 
> The Shrink Database took a very long time. When it was done, I was told, 
> "The database has been shrunk to a size of 2 GB"!!! So it grew instead of 
> shrinking!

Haven't I told you to stop shrinking databases! There are very few 
situations where shrinking a database is a good idea. But there are many 
where it is an outright bad idea. 
 
> So, apparently, making the changes, in and of themselves, didn't balloon
> the database. But sometime after that -- probably the following night's 
> optimizations, which included a Shrink Database, caused it to balloon to
> 3 GB. 

Probably it was not until the reindexing that SQL Server allocated full 
space for the new columns.

Judging from the new numbers from sp_spaceused you posted, there is 
plenty of free space in the database. Let it stay that way, it's 
not going to do you any harm. Although the amount of unused space
is possibly a little worrying, since that indicates quite an amount
of fragmentation. But if you stop shrinking your database, the next
reindexing job should take care of that. (Shrinking introduces 
fragmentation, another reason it's bad.)

>> By the way, are you still on SQL 7? I seem to recall that you talked
>> up moving on to SQL 2005, but did that materialise?
> 
> It's still upcoming. Hopefully in the next month or two. You think that 
> might make a difference with this situation?
 
The database is not going to shrink when you upgrade to SQL 2005, if
that is what you think. Let me put it this way: you have recently
become the proud owner of a 3GB database, congratulations!





-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

esquel (7068)
12/25/2007 9:22:06 AM
"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A116ABC6CF5CYazorman@127.0.0.1...
> Neil (nospam@nospam.net) writes:
>> I took the last backup from before the database ballooned, and I
>> downloaded it to my development machine, so as to be able to look at the
>> old vs. new data, to see if I missed anything.
>
> Wait, didn't I tell you celebrate Christmas and relax?!?!?

What makes you think I'm not relaxing?.... :-)


>> So, apparently, making the changes, in and of themselves, didn't balloon
>> the database. But sometime after that -- probably the following night's
>> optimizations, which included a Shrink Database, caused it to balloon to
>> 3 GB.
>
> Probably it was not until the reindexing that SQL Server allocated full
> space for the new columns.
>
> Judging from the new numbers from sp_spaceused you posted, there is
> plenty of free space in the database. Let it stay that way, it's
> not going to do you any harm. Although the amount of unused space
> is possibly a little worrying, since that indicates quite an amount
> of fragmentation. But if you stop shrinking your database, the next
> reindexing job should take care of that. (Shrinking introduces
> fragmentation, another reason it's bad.)

I went to run another sp_spaceused, to see what it looked like, since I 
turned shrinking off a couple of days ago. But then I saw that shrinking ran 
last night. Couldn't figure that out, since I had turned it off. Turns out I 
had _two_ shrink jobs running each night! -- one with the regular 
optimizations, and one as a standalone job! Oy! So I turned the second one 
off, and we'll see what happens.

BTW, why do they even have this shrinking thing if it's not needed? I mean, 
I don't doubt you when you say it does more harm than good; but why is it 
there in the first place if it just causes fragmentation?


>
>>> By the way, are you still on SQL 7? I seem to recall that you talked
>>> up moving on to SQL 2005, but did that materialise?
>>
>> It's still upcoming. Hopefully in the next month or two. You think that
>> might make a difference with this situation?
>
> The database is not going to shrink if you upgrade to SQL 2005 if
> that is what you think.

I meant either that: a) perhaps SQL 2005 might have some superior tools for 
dealing with this; and/or b) when the database is converted to SQL 2005 the 
objects might be rewritten in such a way that it would get rid of the 
bloating. But I guess no to (a) or (b).


> Let me put it this way: you have recently
> become the proud owner of a 3GB database, congratulations!

Yay me! :-) OK, that's fine, I can live with that, as long as it's not 
problematic re. performance. But there's something I'm still not 
understanding. Forgive me for being dense, but here goes.

The second sp_spaceused showed:

database_size      unallocated space
------------------ ------------------
3320.06 MB         297.91 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
3094688 KB         2483672 KB         169712 KB          441304 KB


The data is 2.4 GB, which is 1.4 GB more than the DB was previously (and 
when added to the index_size and unused, accounts for the additional 2 GB). 
Where is this 1.4 GB coming from? I mean, there's only at most 1 GB of 
actual data in the database. So what is the other 1.4 GB? Again, forgive me 
for being dense here if you've already explained it.

Thanks!

Neil


nospam52 (1566)
12/26/2007 10:54:38 AM
Neil (nospam@nospam.net) writes:
> BTW, why do they even have this shrinking thing if it's not needed? I
> mean, I don't doubt you when you say it does more harm than good; but
> why is it there in the first place if it just causes fragmentation? 

Good question. Well, for the shrinking facility as such there are of
course situations where it's useful. You have just erased five years of
data, and you expect the business to be a lot calmer for the next year,
so your database is now oversized. Or you took a copy of the production
database to get a development database to play with, but you only want
a fraction of the data, so you delete most and then shrink. This 
becomes even more critical if you create one database per developer.

Now, as for why they put it as an option for maintenance jobs that's a
really good question. Unfortunately, there are plenty of things in 
the tools that violate best practices for the server.

> I meant either that: a) perhaps SQL 2005 might have some superior tools
> for dealing with this; and/or b) when the database is converted to SQL
> 2005 the objects might be rewritten in such a way that it would get rid
> of the bloating. But I guess no to (a) or (b). 

The tools are about the same, but the behaviour may be a little different.

> The data is 2.4 GB, which is 1.4 GB more than the DB was previously (and
> when added to the index_size and unused, accounts for the additional 2
> GB). Where is this 1.4 GB coming from? I mean, there's only at most 1 GB
> of actual data in the database. So what is the other 1.4 GB? Again,
> forgive me for being dense here if you've already explained it. 

The figure of 2.4 GB comes from used pages. If only 100 bytes are actually
used on a page, that still counts as 8192 bytes. That is, this is another
sign of fragmentation.

You could run DBCC SHOWCONTIG on some big tables to have a look at
fragmentation. But if you reindex every night, and don't shrink anything,
the situation should improve.
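A nightly reindex on SQL 7/2000 typically means DBCC DBREINDEX per table; as a sketch (the table name is a placeholder):

```sql
-- Rebuild every index on one table; '' means all indexes,
-- 90 leaves 10% free space per page (matching the job's setting)
DBCC DBREINDEX ('dbo.MyBigTable', '', 90)
```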

By the way, does this query return any rows:

  SELECT * FROM sysindexes WHERE indid = 0

Or to put it another way: do you have tables without a clustered index?

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

esquel (7068)
12/26/2007 1:14:01 PM
> You could run DBCC SHOWCONTIG on some big tables to have a look at
> fragmentation. But if you reindex every night, and don't shrink anything,
> the situation should improve.

I ran it on the five largest tables in the db (based on the bytes used, not 
the pages). The Extent Scan Fragmentation for those five tables was: 86%, 
70%, 97%, 98%, 74%.

Another thing that might be a factor here is that I noticed that disk space 
is very low on the drive that contains the database. I have been keeping 4 
weeks' worth of backups for this database. When the backup ballooned to 3 GB 
instead of 1 GB, it started eating up more drive space. Last night's backup 
(after, I think, 5 of these 3 GB backups were on the drive) wouldn't go 
through, as there was only 2 GB free on the drive. I deleted some backups, 
and changed the storage to 1 week, instead of 4, and the backup went 
through. Still, I wonder how much the limited disk space is affecting the 
current situation.


> By the way, does this query return any rows:
>
>  SELECT * FROM sysindexes WHERE indid = 0
>
> Or to put it another way: do you have tables without a clustered index?

Yes, apparently so. The query returned 51 rows.

Thanks,

Neil 


nospam52 (1566)
12/27/2007 12:32:34 PM
Neil (nospam@nospam.net) writes:
>> You could run DBCC SHOWCONTIG on some big tables to have a look at
>> fragmentation. But if you reindex every night, and don't shrink anything,
>> the situation should improve.
> 
> I ran it on the five largest tables in the db (based on the bytes used,
> not the pages). The Extent Scan Fragmentation for those five tables was:
> 86%, 70%, 97%, 98%, 74%. 

That's quite high. What values do you have for Scan Density and Avg. Page 
Density (full)? Or post the full output for one these tables.

> Another thing that might be a factor here is that I noticed that disk
> space is very low on the drive that contains the database. I have been
> keeping 4 weeks worth of backups for this database. When the backup
> ballooned to 3 gb instead of 1 gb, it started eating up more drive
> space. Last night's backup (after, I think, 5 of these 3 gb backups were
> on the drive) wouldn't go through, as there was only 2 gb free on the
> drive. I deleted some backups, and changed the storage to 1 week,
> instead of 4, and the backup went through. Still, I wonder how much the
> limited disk space is affecting the current situation. 

The limited disk space is not going to affect your database, but
obviously the expanded database will affect the available disk
space. Then again, you are not storing the backups on the same drive,
are you? (Well, if you get them on tape or some other media as well,
I guess it's OK.)

>> Or put in another way: do you have tables without a clustred index?
> 
> Yes, apparently so. The query returned 51 rows.

And that included your big tables? 

I think we're on to something here. Your "optimization job" is in vain;
you cannot defragment a heap (heap = table without a clustered index).

Adding clustered indexes on all tables takes a little thought to
make the best choice for the index. Then again, if you only have
one index on a table, that is probably the best choice.

But you could also just add a clustered index on any column and 
then drop it. That will defragment the table.
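
A minimal sketch of that approach (the column name here is hypothetical; 
any column will do, since the index is only created to rebuild the pages):

```sql
-- Creating the clustered index rewrites the heap's pages contiguously;
-- dropping it afterwards leaves the (now defragmented) heap behind.
CREATE CLUSTERED INDEX ix_defrag ON CustomerMerges (CustomerID)
GO
DROP INDEX CustomerMerges.ix_defrag
GO
```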

While I know a few who disagree, I think it's best practice to 
have a clustered index on all tables.


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
0
esquel (7068)
12/27/2007 10:26:31 PM
"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A13EFC4CFA19Yazorman@127.0.0.1...
> Neil (nospam@nospam.net) writes:
>>> You could run DBCC SHOWCONTIG on some big tables to have a look at
>>> fragmentation. But if you reindex every night, and don't shrink 
>>> anything,
>>> the situation should improve.
>>
>> I ran it on the five largest tables in the db (based on the bytes used,
>> not the pages). The Extent Scan Fragmentation for those five tables was:
>> 86%, 70%, 97%, 98%, 74%.
>
> That's quite high. What values do you have for Scan Density and Avg. Page
> Density (full)? Or post the full output for one of these tables.

Here's the full output. Will reply to the rest later (am out the door...). 
Thanks!

DBCC SHOWCONTIG scanning 'CustomerMerges' table...
Table: 'CustomerMerges' (709733731); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 725
- Extents Scanned..............................: 96
- Extent Switches..............................: 95
- Avg. Pages per Extent........................: 7.6
- Scan Density [Best Count:Actual Count].......: 94.79% [91:96]
- Extent Scan Fragmentation ...................: 86.46%
- Avg. Bytes Free per Page.....................: 404.7
- Avg. Page Density (full).....................: 95.00%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'ImageFilesProcessed' table...
Table: 'ImageFilesProcessed' (1992550332); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1695
- Extents Scanned..............................: 214
- Extent Switches..............................: 213
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 99.07% [212:214]
- Extent Scan Fragmentation ...................: 70.09%
- Avg. Bytes Free per Page.....................: 375.4
- Avg. Page Density (full).....................: 95.36%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'CustActivity' table...
Table: 'CustActivity' (1006730739); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 270
- Extents Scanned..............................: 38
- Extent Switches..............................: 37
- Avg. Pages per Extent........................: 7.1
- Scan Density [Best Count:Actual Count].......: 89.47% [34:38]
- Extent Scan Fragmentation ...................: 97.37%
- Avg. Bytes Free per Page.....................: 392.2
- Avg. Page Density (full).....................: 95.15%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'CustomerEvents' table...
Table: 'CustomerEvents' (1029734871); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 320
- Extents Scanned..............................: 46
- Extent Switches..............................: 45
- Avg. Pages per Extent........................: 7.0
- Scan Density [Best Count:Actual Count].......: 86.96% [40:46]
- Extent Scan Fragmentation ...................: 97.83%
- Avg. Bytes Free per Page.....................: 376.9
- Avg. Page Density (full).....................: 95.34%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'ImageFileErrors' table...
Table: 'ImageFileErrors' (69067482); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1936
- Extents Scanned..............................: 243
- Extent Switches..............................: 242
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.59% [242:243]
- Extent Scan Fragmentation ...................: 74.49%
- Avg. Bytes Free per Page.....................: 344.1
- Avg. Page Density (full).....................: 95.75%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.


0
nospam52 (1566)
12/27/2007 11:36:50 PM
Neil (nospam@nospam.net) writes:
> Here's the full output. Will reply to the rest later (am out the door...). 

OK, the numbers look good, besides the extent scan fragmentation, that is.
But it's not that your pages are half-full or something like that.


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
0
esquel (7068)
12/28/2007 8:31:09 AM
>>> Or put in another way: do you have tables without a clustered index?
>>
>> Yes, apparently so. The query returned 51 rows.
>
> And that included your big tables?
>
> I think we're on to something here. Your "optimization job" is in vain;
> you cannot defragment a heap (heap = table without a clustered index).
>
> Adding clustered indexes on all tables takes a little thought to
> make the best choice for the index. Then again, if you only have
> one index on a table, that is probably the best choice.
>
> But you could also just add a clustered index on any column and
> then drop it. That will defragment the table.
>
> While I know a few who disagree, I think it's best practice to
> have a clustered index on all tables.


OK, yes, none of the five tables I previously reported on had a clustered 
index. I had created primary keys on those tables, but not clustered 
indexes. For some reason, I think I misunderstood this blurb from BOL:

"PRIMARY KEY constraints create clustered indexes automatically if no 
clustered index already exists on the table and a nonclustered index is not 
specified when you create the PRIMARY KEY constraint."

I think I assumed that the clustered index would automatically be created on 
the pk index.

OK, so I added clustered indexes to those five tables (either on the primary 
key or on an index that I felt approximated the order that the data would be 
used in; or, in the case of child tables, on the foreign key field). I also 
added clustered indexes on about 10 other tables that stood out as tables of 
significant size.

For the five large tables, the before->after on Reserved was as follows:

CustomerMerges: 226661->959
ImageFilesProcessed: 72333->1727
CustActivity: 38034->303
CustomerEvents: 28018->367
ImageFileErrors: 24391->2046

So, some very dramatic changes there!

In terms of fragmentation, also some very dramatic changes. Here is the 
"after" data for those five tables:


DBCC SHOWCONTIG scanning 'CustomerMerges' table...
Table: 'CustomerMerges' (709733731); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 952
- Extents Scanned..............................: 119
- Extent Switches..............................: 118
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [119:119]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 6.72%
- Avg. Bytes Free per Page.....................: 12.3
- Avg. Page Density (full).....................: 99.85%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'ImageFilesProcessed' table...
Table: 'ImageFilesProcessed' (1992550332); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1763
- Extents Scanned..............................: 221
- Extent Switches..............................: 220
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [221:221]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.90%
- Avg. Bytes Free per Page.....................: 48.1
- Avg. Page Density (full).....................: 99.41%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'CustActivity' table...
Table: 'CustActivity' (1006730739); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 277
- Extents Scanned..............................: 35
- Extent Switches..............................: 34
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 100.00% [35:35]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 2.86%
- Avg. Bytes Free per Page.....................: 41.6
- Avg. Page Density (full).....................: 99.49%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'CustomerEvents' table...
Table: 'CustomerEvents' (1029734871); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 337
- Extents Scanned..............................: 43
- Extent Switches..............................: 42
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 100.00% [43:43]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 6.98%
- Avg. Bytes Free per Page.....................: 38.6
- Avg. Page Density (full).....................: 99.52%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

DBCC SHOWCONTIG scanning 'ImageFileErrors' table...
Table: 'ImageFileErrors' (69067482); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1968
- Extents Scanned..............................: 246
- Extent Switches..............................: 245
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [246:246]
- Logical Scan Fragmentation ..................: 2.74%
- Extent Scan Fragmentation ...................: 23.58%
- Avg. Bytes Free per Page.....................: 68.1
- Avg. Page Density (full).....................: 99.16%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.



I then ran the "sp_spaceused NULL, true" command again. But the results were 
pretty much the same as they had been previously. I then ran the 
optimization job and then reran the "sp_spaceused NULL, true" command, and 
the results were still pretty much the same.
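
For reference, the calls in question look like this ('true' makes SQL Server 
refresh the usage counters in sysindexes before reporting):

```sql
-- Database-wide totals, with the usage counters corrected first
EXEC sp_spaceused NULL, 'true'

-- The same per table; this is one way to get the per-table Reserved figures
EXEC sp_spaceused 'CustomerMerges', 'true'
```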



In any case, there does seem to be much improvement by adding the clustered 
index. So thank you for that!



I was wondering: is there a way to have the query SELECT * FROM sysindexes 
WHERE indid = 0 list the actual names of the tables without clustered 
indexes, instead of their IDs?


Thanks, Erland!



Neil


0
nospam52 (1566)
12/28/2007 11:46:48 AM
Neil (nospam@nospam.net) writes:
> "PRIMARY KEY constraints create clustered indexes automatically if no 
> clustered index already exists on the table and a nonclustered index is
> not specified when you create the PRIMARY KEY constraint." 
> 
> I think I assumed that the clustered index would automatically be
> created on the pk index. 

That's correct. If you say 

   CREATE TABLE alfons(a int NOT NULL, 
                       b int NOT NULL,
                       c int NOT NULL,
                       CONSTRAINT pk_alfons PRIMARY KEY(a, b, c))

The primary key will indeed be clustered. You need to include NONCLUSTERED
to have it nonclustered.

Possibly if you created the tables and keys through some table designer,
you may have gotten non-clustered PKs.
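
For contrast, this is the variant that leaves the table a heap; the 
NONCLUSTERED keyword (which some designers emit) makes the difference:

```sql
-- The PK index is created nonclustered, so the table remains a heap
CREATE TABLE alfons2(a int NOT NULL,
                     CONSTRAINT pk_alfons2 PRIMARY KEY NONCLUSTERED (a))
```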

> For the five large tables, the before->after on Reserved was as follows:
> 
> CustomerMerges: 226661->959
> ImageFilesProcessed: 72333->1727
> CustActivity: 38034->303
> CustomerEvents: 28018->367
> ImageFileErrors: 24391->2046
> 
> So, some very dramatic changes there!

Indeed, but...
 
> I then ran the "sp_spaceused NULL, true" command again. But the results
> were pretty much the same as they had been previously. I then ran the 
> optimization job and then reran the "sp_spaceused NULL, true" command,
> and the results were still pretty much the same. 

One would have expected the numbers here to fall as well. 

And there is another thing that is fishy as well. The "Pages Scanned"
in the old output was nowhere close to the numbers for Reserved. Now,
Reserved also includes non-clustered indexes, but I can't see how a
non-clustered index could be that badly fragmented when you run 
defragmentation regularly.

So I am a little uncertain that you really gained that much. Maybe
something is flaky with SQL 7. But if you really gained that much space
by adding clustering indexes - then I would say that you really have a 
case for shrinking. But that would be if sp_spaceused reported over
1GB of space unallocated.


> I was wondering: is there a way to have the query SELECT * FROM
> sysindexes WHERE indid = 0 list the actual names of the tables without
> clustered indexes, instead of their IDs?
 
object_name(id). But the name of the "index" is the table name for
indid = 0.
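
Put together, a query along those lines (the OBJECTPROPERTY filter, which 
also exists on SQL 7, keeps system tables out of the list):

```sql
-- Name every user table that lacks a clustered index (indid 0 = heap)
SELECT object_name(id) AS TableName
FROM   sysindexes
WHERE  indid = 0
  AND  OBJECTPROPERTY(id, 'IsUserTable') = 1
ORDER  BY TableName
```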


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
0
esquel (7068)
12/28/2007 10:45:41 PM
> Possibly if you created the tables and keys through some table designer,
> you may have gotten non-clustered PKs.

I believe that's what happened. When I originally converted the old MDB file
to SQL Server, DTS created clustered PKs, as well as timestamp fields. The
tables I created since have been created through an Access ADP file, which
apparently doesn't create clustered PKs (nor timestamp fields).

>> I then ran the "sp_spaceused NULL, true" command again. But the results
>> were pretty much the same as they had been previously. I then ran the
>> optimization job and then reran the "sp_spaceused NULL, true" command,
>> and the results were still pretty much the same.
>
> One would have expected the numbers here to fall as well.
>

There were still 37 tables that didn't have clustered indexes. Most of them 
were tiny lookup tables. Nevertheless, I added clustered indexes to all 
tables, except for seven that were created by SQL Server and had no indexes 
at all. These were:

sysfiles1
sysindexkeys
sysforeignkeys
sysmembers
sysprotects
sysfiles
Trace_References


After creating all the clustered indexes, I noticed that the tables taking 
up the most space were the ones I expected (the previous "top five" had been 
a surprise to me). So things seem much more as I would have expected now, in 
terms of which tables are taking up the most space.



Nevertheless, running "sp_spaceused NULL, true" still didn't reveal any 
changes. Also ran optimization and reran it again, but still no change.



So this gets back to my original question. If there is about 1 GB of data in 
the db, and the database is showing 2.5 GB, then where's that other 1.5 GB 
coming from? Fragmentation seems to have been reduced greatly. So what is 
that other 1.5 GB that is being reported? This is very strange.


> And there is another thing that is fishy as well. The "Pages Scanned"
> in the old output was nowhere close to the numbers for Reserved. Now,
> Reserved also includes non-clustered indexes, but I can't see how an
> non-clustered index could be that badly fragmented when you run
> defragmentation regularly.
>
> So I am a little uncertain that you really gained that much. Maybe
> something is flaky with SQL 7. But if you really gained that much space
> by adding clustering indexes - then I would say that you really have a
> case for shrinking. But that would be if sp_spaceused reported over
> 1GB of space unallocated.

Yeah, there was still only about 300 MB unallocated. But I ran the database 
shrink anyway. No change with sp_spaceused.

So, I don't know. This is very strange -- especially the way it just happened 
overnight like that.

Thanks,

Neil 


0
nospam52 (1566)
12/29/2007 6:57:10 AM
Neil (nospam@nospam.net) writes:
> So this gets back to my original question. If there is about 1 GB of
> data in the db; and the database is showing 2.5 GB of data; then where's
> that other 1.5 GB of data coming from? Fragmentation seems to have been
> reduced greatly. So what is that other 1.5 GB that is being reported?
> This is very strange. 
 
I'm afraid that by now I have exhausted all ideas. With the drastic
reduction of CustomerMerges, one would expect to see a big increase in
unallocated space. 226661 pages is 1.8 GB. I suspect that the number
226661 is bogus somehow.

The only thing I can suggest is that you create a new database from scripts,
and then load it with data from the current database. It may not be worth
the hassle, though.

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
0
esquel (7068)
12/29/2007 9:53:31 AM
On 29.12.2007 10:53, Erland Sommarskog wrote:
> Neil (nospam@nospam.net) writes:
>> So this gets back to my original question. If there is about 1 GB of
>> data in the db; and the database is showing 2.5 GB of data; then where's
>> that other 1.5 GB of data coming from? Fragmentation seems to have been
>> reduced greatly. So what is that other 1.5 GB that is being reported?
>> This is very strange. 
>  
> I'm afraid that by now I have exhausted all ideas. With the drastic
> reduction of CustomerMerges, one would expect to see a big increase in
> unallocated space. 226661 pages is 1.8 GB. I suspect that the number
> 226661 is bogus somehow.
> 
> The only thing I can suggest is that you create a new database from scripts,
> and the load it with data from the current database. It may not be worth
> the hassle though.

But if Neil switches SQL Server versions during the process (he is still 
on 7, with two newer versions available), it might still be worthwhile 
(proper licenses available, of course). Or would you rather disregard that 
option for compatibility reasons?

Kind regards

	robert
0
shortcutter (5831)
12/29/2007 5:49:34 PM
Robert Klemme (shortcutter@googlemail.com) writes:
> But if Neil switches SQL Server versions during the process (he is still 
> on 7 with two newer versions) it might still be worthwhile (proper 
> licenses available of course).  Or would you rather disregard that 
> option for compatibility reasons?
 
There is some point in doing the migration by rebuild and scripting, 
as there are some funky behaviours with migrated databases. However, these
are not very serious. (The only one I recall is that 
sys.dm_os_buffer_descriptors may give bogus values.) So it is still 
questionable whether it's worth the hassle. 


Funny enough, though, when we have upgraded our customers from SQL 2000 to 
SQL 2005 we have taken this path at most sites. But we have our own special 
reason: we wanted to move away from ANSI_NULLS off, and the only supported 
way to do this is to recreate all tables, as there is an ANSI_NULLS flag 
setting per table. The only way to change it in place is to update a status 
bit in sysobjects, which is not recommended. (But we still did this at some
sites where the database size precluded a reload.)


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
0
esquel (7068)
12/29/2007 6:52:34 PM
Reply:
Similar Artilces:

Opening Another Database from WITHIN a Database
I have a database (2000.mdb) which links to my data tables (Data2000.mdb). The database has a Main Menu screen (not a switchboard - a custom one). I need to import records from 5 text files into the 5 different Tables in Data2000.mdb. Here was my plan: Create various Import specs in Data2000.mdb Create a Macro (in Data2000.mdb) which calls each of the specs sequentially. Place a button on the Main Menu (2000.mdb), which closes the current database (optional), opens Data2000.mdb, runs the Macro (Import Reg), then returns the user to the original Menu in the original database (200...

[News] Embedded Database Gets Open Source Operating System
Open-source RTOS adds in-memory DB ,----[ Quote ] | McObject has port its in-memory embedded database to the open-source | eCos RTOS (real-time operating system)... `---- http://www.linuxdevices.com/news/NS9842503412.html ...

database #2 1464409
I am very new with Microsoft Databases, so know very little. This may be a dumb question, but need help. Here's my question - I emailed myself a database from work to home to do some of my work at home over the long weeken, my home version is MSAccess97, my work version is Office2000. Is there a website that will convert this database from 2000 to 97 for me.? Open your database to the database window. Click on Tools - Database Utilities - Convert and follow the directions to convert your database to Access97. -- PC Datasheet Your Resource For ...

US-CT: Norwalk-Applications/Database Programmer
************************************************************** JobCircle.com is a regional job board serving all industries and occupations in the PA, NJ, DE, NY, MD, and DC areas. We offer a regional job board, online learning, and regional job fair events for jobseekers who live in our areas of focus. To Learn More: Job Board: http://www.JobCircle.com?source=ng eLearning: http://www.JobCircle.com/courseware?source=ng Job Fairs: http://JobCircleJobFairs.com?source=ng ************************************************************** Job Title: Applications/Database...

FAQ 8.8 How do I get the screen size? #18
This is an excerpt from the latest version perlfaq8.pod, which comes with the standard Perl distribution. These postings aim to reduce the number of repeated questions as well as allow the community to review and update the answers. The latest version of the complete perlfaq is at http://faq.perl.org . -------------------------------------------------------------------- 8.8: How do I get the screen size? If you have "Term::ReadKey" module installed from CPAN, you can use it to fetch the width and height in characters and in pixels: use Term::ReadKey; ...

Using Database Passwords
I am trying to use a password protected Access 2000 database in a VB application but I keep getting error messages that the password supplied is incorrect (even though I know it is not). Please advise what is wrong in the following. Thanks for any help. Dim db As DAO.Database Dim strPwd As String strPwd = "Mysecret" Set db = DAO.OpenDatabase(App.Path & "\MyData.mdb",,,"OBDC; pwd=strPwd") Try Set db = DAO.OpenDatabase(App.Path & "\MyData.mdb",,,"OBDC; pwd=" & strPwd) "Petey" <nospam.pjasak@v...

Looking for a database....
Hello, I am looking for a Database for managing sales.... I did use in the past a Siebel, but we can't afford such product... I found really interesting the fact to see rapidely for one opportunity (for a specific company), the list of people (not necessery in the company) involved for this Opty. You can see all your opty "in progress" Is there someone who can advise me a similar database ??? Thanks in advance S Stephane M wrote: > Hello, > > I am looking for a Database for managing sales.... > > I did use in the past a Siebel, but...

US-PA: West Chester-Oracle DBA "Database Administrator"
************************************************************** JobCircle.com is a regional job board serving all industries and occupations in the PA, NJ, DE, NY, MD, and DC areas. We offer a regional job board, online learning, and regional job fair events for jobseekers who live in our areas of focus. To Learn More: Job Board: http://www.JobCircle.com?source=ng eLearning: http://www.JobCircle.com/courseware?source=ng Job Fairs: http://JobCircleJobFairs.com?source=ng ************************************************************** Job Title: Oracle D...

Database Database Database Database Software Cheap
Database Database Database Database Software Cheap Great Datase Software See Website Below. Ultra Easy to Learn (Typically 30 Seconds) Professional Programmable Database Ver. 2.3 2.1 Million Record Capacity, (New cond). Search Rate: 2000 / Records / Second. DataBase Type: Random Access. Can Create Unlimited Databases. Programmable fields for any Application. Has Six Seperate Field Sets All Programmable. Build Time One Second, (Auto Creates DB). Setup Time: Instantly, Just Enter DB Name. Ultra Cheap Price, Special $20, Paypal Accepted. Application Mailed Instantly (file Attached Em...

database management software ( DBSmart)
Smart Datasys INC Dear Sir/Madam, We are software development and database management company based in USA and having branch in Pune (India) We Launch database Management Software(DBSmart) FREE DOWNLOAD DBSmrt http://www.smartdatasys.com/download/dbsmarttrial.html DBSmart - One tool does it all. Database Management Database Browsing and Reverse Engineering Automate Backup and Restore with timestamp SQL/PL SQL Editor to execute queries Execute Query across all the servers and get combined result Create BMP file of the screen Reverse Engineer DDLs and save them into individual files to main...

Database issues with SOAP::LITE
Hi All, I am creating a web service in soap environment from which i am inserting some data into database written in perl. Whenever i use the DBD::Oracle module, i am getting 500 internal server error. Can anyone help me out wher i am doing wrong? Thanks & Regards Chinmoy. chinu <dchinmoy2003@gmail.com> wrote: > Hi All, > > I am creating a web service in soap environment from which i am > inserting some data into database written in perl. > > Whenever i use the DBD::Oracle module, i am getting 500 internal > server error. Is there any reason to think that...

start a database application
how to start a database aplication using delphi? i have delphi 7, where i can find tutorial about delphi database ? thanks There are many books on Delphi programming. Check www.amazon.com and search 'books' for 'delphi programming'. You will find many with high user ratings (4-5 stars). Many of these books are a step-by-step guide for writing an application. Also, Delphi ships with a directory of sample applications. Study these to see how to use specific objects. Many of us in this newsgroup are willing to help with specific questions. Let us know what your ...

Re: Retrospective study, sample size issue
On Wed, 1 Oct 2008 10:04:41 -0700, Sassy <AugustinaO@GMAIL.COM> wrote: >I'm trying to conduct a study that will compare the mortality rate as >well as other outcomes in patients with e.g. type 1 diabetes and type >2 diabetes. I calculated the sample size using mortality proportions >from a clinical trial. We wanted a sample size that has 80% power >and >will allow us to detect a significant difference between the two >groups. My only concern is that in a database study, the mortality >rates might be different and also the sample size i obtained was >pret...

disk crash, DBMS database error?
I have been contacted by a Manman Dec customer site desperate for help. They are not Camus members but I would like to try and help them out. They are running a Microvax 3100, VMS 5.5, DBMS 5.1, Manman 8.4 (yes there are still some out there). It seems that they had a disk crash on the disk holding their databases. They have replaced the disk and done a restore and Manman opens up ok. They can run reports and lists however whenever they run a transaction that writes data they whole system hangs. The only option they have then is to switch the system off and back on again. There is no obvio...

database tables and relationships
Hi, Just wondering if any one can help with my table layout and relationship design for my database, the total design is not quite finished as you can probably see, Any help would be greatly appreciated..... 1. The purpose of this database will be to store race results for drag racers this will include specifications of there cars, car setups, engine specs, track conditions, weather conditions and race results. I would like to be able to look up different weather conditions from previous events so that i can set the car and engine up similar as to keep race times as consistent as possible. ...

No pixel size information on PET/CT magicview cd
Greetings, We've received a CD of a PET/CT scan in which all of the dicom slices seem to be secondary images with no information about the pixel spacing, slice thickness, slice ordering etc. They seem to be just an RGB image with the patient/institution information tacked on the top. We need to have the 3-D information so we can fuse the images with some current scans, and just can't find it anywhere in the DICOM headers, nor apparently in the DICOMDIR file in the root of the CD (which seems to be all the headers catted together. HOWEVER, when you view the series using ...

Restore database to a certain point of time.
I will make it simpler to look... I have DB1 - as backup for day 1 LOg1 as backup of logs T1 T2 T3 T4 T5 ...some transaction on day 2 Now i backup again DB2 Log2 I want to restore the database till the point of transaction T3 say. I know the time or i assume a certain time. Is this possible .....i tried several options but hand in between for some reason or the other. How can i achieve my solution. Is there some extra parameter i will require or what....i am wondering now that it is not at all possible. Please help. RVG If possible guys can you please mail me the sloution on raj...

database access
is it possible for me to get the details of a database or can access the database available on net by using matlab "vidyadharavemuri" <vvidyadhara@gmail.com> wrote in message news:ef5ce4d.-1@webcrossing.raydaftYaTP... > is it possible for me to get the details of a database or can access > the database available on net by using matlab Use the Database Toolbox: http://www.mathworks.com/products/database/ -- Steve Lord slord@mathworks.com ...

File.read(fname) vs. File.read(fname,File.size(fname))
Hi, this is my first post on ruby-forum. Hope this is useful to someone! I have learned from experience to avoid reading files using File.read(filename)... it gives terrible performance on even moderately large files. Reading large files line by line is much faster, and uses much less memory. However, there are cases when you do want the entire file in a single string. I just discovered that you can do this MUCH faster with File.read(filename,File.size(filename))... check this out: > File.size(bigfile) # not really that big... just 10 MB => 10531519 > Benchmark.bm do |bm|...

split database question
hey... is it possible to add a command button to email the back half of a split database using sendobject or some other method? if not specifically targeting the back half of the db, can a specific file name be hard-coded to be automatically emailed on a command button (say, if the user was to zip the _be file in the 'my documents' folder, then use the button to email it)? thanks. -- Greg Message posted via http://www.accessmonster.com ...

Replication
We have a SQL Server database that is replicated to many users. We are currently in an expansion phase where we need to make changes to the server database. Each time we roll out a new release, we delete the local replicated database and recreate it. Is there any way to automatically transfer the changes from the server to the existing local database without deleting it? dipillama@nswccd.navy.mil (Michael diPilla) wrote in message news:<4e67e68b.0308181234.4c8e5e98@posting.google.com>... > We have a SQLSERVER database that is replicated to many users. > We are currently in an expan...

Database doubles in size, when forms deleted even when compacted!
Here's a strange one. I made a modification to a database recently, and wound up getting an error message when trying to make an mde file. I'm using Access XP/2002, and the error message I got was "Microsoft Access was unable to create an mde database". After researching the error and the solutions, I'd run the gamut of solutions (or complied with the requirements) when I came across the "maximum table ids" issue, which can cause this error. Although the database is quite large, I can't really see that being an issue since I just made an mde out of t...

Converting database from COBOL database to DBF
I found some important data stored as COBOL database files with EFT, FIN and KEY extensions. It seems that data is stored as fixed rows like in DBF files. Does anybody have a procedure or routine for converting such data. Dear Zeljko: On Sep 12, 6:20 am, Zeljko <astin...@gmail.com> wrote: > I found some important data stored as COBOL > database files with EFT, FIN and KEY extensions. > It seems that data is stored as fixed rows like > in DBF files. Does anybody have a procedure or > routine for converting such data. Looks like you can possibly connect...
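If the files really are fixed-width records, the usual approach is to work out the record layout (field widths and types, often from the COBOL copybook if it survives) and slice each record accordingly, writing the result to something portable like CSV, which DBF-capable tools can import. The layout below is a made-up example, not the actual EFT/FIN/KEY format; a sketch:

```python
import csv
import io
import struct

# Hypothetical fixed-width layout: 10-char name, 8-char date, 12-char amount.
# A real conversion would take these widths from the COBOL record definition.
RECORD = struct.Struct("10s8s12s")

raw = b"SMITH     20070101      100.50JONES     20070102       75.25"

rows = []
for off in range(0, len(raw), RECORD.size):
    name, date, amount = (field.decode("ascii").strip()
                          for field in RECORD.unpack_from(raw, off))
    rows.append([name, date, amount])

buf = io.StringIO()
csv.writer(buf).writerows(rows)  # CSV imports cleanly into most DBF tools
print(buf.getvalue())
```

Caveats: COBOL files often use COMP-3 (packed decimal) or binary fields rather than plain text, and may be EBCDIC-encoded; those need dedicated decoding before this kind of slicing works.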

RB Plugins folder size matter?
I put a ton of plugins in my RB plugins folder. If an app I compile isn't using them, does it increase the size of the executable for some reason anyways? Is there a reason not to put a ton of plugins in there? Does it slow anything down? Thanks. Jason Harrison <searchbuffet@yahoo.com> wrote: > I put a ton of plugins in my RB plugins folder. If an app I compile > isn't using them, does it increase the size of the executable for some > reason anyways? no. > Is there a reason not to put a ton of plugins in there? no. > Does it slow anything down...

Re: lost database
I'm assuming you are using pg_dumpall for your backups. In that case, I'd restore the instance somewhere else, and pg_dump the one database. At 09:12 AM 12/9/2003, Jodi Kanter wrote: >I am working with someone who has been creating and maintaining >his own database. He somehow managed to blow away his database and needs >to restore. However, he has not bee...