Database Tripled In Size!!



Yikes! My database, which had been consistently 1 gig for a long time, went 
from being 1 gig to 3 gigs overnight! Looking at the nightly backups, the 
database increased on average about 5-15 MB per day, and was 1.06 GB on the 
Thursday night backup. Then, with the Friday night backup, it was 2.95 GB, 
and has stayed that way since!

I did a Shrink on the database, but that didn't help the situation.

The only thing I could think it might relate to is the following. I 
previously (about a week ago) changed a couple of tables' DateModified field 
from smalldatetime to datetime. (I posted about this under a separate thread 
here.) For some reason I was getting occasional errors which I believe might 
have been related to the new data type. So I decided to change the data 
types back to smalldatetime.

I made the table changes Thursday night, right before the backup, but after 
the database optimizations. The backup Thursday night still shows the small 
database size. But the backup Friday night has the large size.

So this might not be related to the table changes at all. But I know for a 
fact that there isn't 3x the data in the database. Somehow the database is 
bloated to 3x its size, and it's not from actual data.

Any ideas about what I can do?

Thanks!

Neil 


Reply nospam52 (1566) 12/24/2007 1:37:57 AM



On Mon, 24 Dec 2007 01:37:57 GMT, "Neil" <nospam@nospam.net> wrote:

Check Database Properties > Files Page. It will show you how much the
db should grow once it fills up. 1MB by default, but you might have
changed it.

3GB is still very small, and hard disk space is cheap.

-Tom.



Reply no.spam.tom7744 (2731) 12/24/2007 6:39:38 AM

"Tom van Stiphout" <no.spam.tom7744@cox.net> wrote in message 
news:sukum3hs6q2hmn7hrtvje819ur81jna20c@4ax.com...
> On Mon, 24 Dec 2007 01:37:57 GMT, "Neil" <nospam@nospam.net> wrote:
>
> Check Database Properties > Files Page. It will show you how much the
> db should grow once it fills up. 1MB by default, but you might have
> changed it.
>
> 3GB is still very small, and hard disk space is cheap.
>
> -Tom.

In File Properties (which I've never touched), it's set to Automatically 
grow file by 10%, with unrestricted file growth.

And I realized that 3 GB isn't that large. Still, the fact remains that it 
couldn't have tripled in size overnight through data entry. So there has to 
be something else going on there. If there's bloating, then it would be good 
to get rid of it. But, as noted, Shrink Database doesn't bring it back down. 
So I don't know what's going on.


Reply nospam52 (1566) 12/24/2007 7:29:01 AM



Here's one thought I had: is it possible to look at a breakdown of the 
objects in the database, and how much space each one is taking up? Perhaps 
that would help to determine what's going on here. 


Reply nospam52 (1566) 12/24/2007 7:40:02 AM

Neil (nospam@nospam.net) writes:
> So this might not be related to the table changes at all. But I know for a 
> fact that there isn't 3x the data in the database. Somehow the database is 
> bloated to 3x its size, and it's not from actual data.
> 
> Any ideas about what I can do?
 
Very simple advice: do nothing. 3GB is not a very big database, and 
I would not lose sleep over a database growing from 1GB to 3GB. Well,
maybe if I were on Express where there is a size limit, but I know you
aren't.

If you really want to find out what happened, first examine whether
it's the log file or the data that has expanded. sp_helpdb gives you
that information.

If the log file has expanded, that may be related to your change of
smalldatetime to datetime - but changing back is going to increase the
log again. In this case you should consider shrinking the log. Normally
shrinking the log is not a good idea, because it will grow again, but
if the log has grown because of some exceptional event, it's reasonable
to shrink it.
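On SQL Server 7.0/2000, shrinking just the log might look something like this (the logical file name and the target size are placeholders, not from this thread):

```sql
-- Find the logical file names for the current database:
EXEC sp_helpfile

-- Shrink only the log file, leaving the data file alone.
-- 'MyDb_Log' and the 100 MB target are made-up values.
DBCC SHRINKFILE (MyDb_Log, 100)
```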

If the data file has expanded, this query shows usage per table:

   select object_name(id), reserved, used, dpages
   from   sysindexes
   where  indid in (0,1)
   order  by reserved desc
   
The numbers are in pages of 8192 bytes. "reserved" is the total that is
reserved for the table, including indexes. "used" is how many of those
pages are actually in use, and "dpages" is the number of data pages.

If there is any table that is suspiciously big, check if any indexes have
been added recently. Also run DBCC SHOWCONTIG to see whether there is 
any fragmentation.
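On SQL 7 that check might look like this (the table name is a placeholder):

```sql
-- Fragmentation report for one table; 'Orders' is a made-up name.
-- (SQL 7 wants the object id; SQL 2000 also accepts the name directly.)
DECLARE @id int
SELECT @id = object_id('Orders')
DBCC SHOWCONTIG (@id)
-- Watch Scan Density (close to 100% is good), Avg. Page Density (full),
-- and Extent Scan Fragmentation (lower is better).
```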


But most of all: I wish you a really Merry Christmas, and only look
at your expanded database if you get really bored by Christmas festivities -
which I hope you don't!

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
Reply esquel (7068) 12/24/2007 9:47:02 AM

"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A106EF2E30A4Yazorman@127.0.0.1...

Thanks, Erland. Yeah, the log file's only 768 KB, whereas the MDF file is 
3.3 GB.

Also, I ran the query you gave, and all the tables appear to be the sizes 
they should be. At least none seemed large enough to account for the extra 
2 GB.

I appreciate you saying not to worry about it. But, still, how could a 
database that has been steady at 1 GB all of a sudden go from 1 GB to 3 
GB in one fell swoop, for no apparent reason? And, if it did do that (and 
never did anything like that before), wouldn't that mean that performance 
would suffer, if there's 2 GB worth of garbage in there somehow?

Thanks, and I wish you a very Merry Christmas as well. Thanks for your 
continued help!

Neil 


Reply nospam52 (1566) 12/24/2007 11:29:35 AM

Neil (nospam@nospam.net) writes:
> Also, I ran the query you gave, and all the tables appear to be the
> sizes they should be. At least none seemed very large, large enough to
> account for 2 GB. 

In that case sp_spaceused for the database should report a lot of free
space.
 
> I appreciate you saying not to worry about it. But, still, how could a 
> database that has been steady at 1 GB just all of sudden go from 1 GB to 3 
> GB in one fell swoop, for no apparent reason. And, if it did do that (and 
> never did anything like that before), wouldn't that mean that performance 
> would suffer, if there's 2 GB worth of garbage in there somehow?

Do you run a regular maintenance job on the database that defragments
indexes? It might be that when you changed those columns to datetime from
smalldatetime, the tables had to be rebuilt entirely on new ground. That
is, all tables were moved, and to get space to move them, the database
exploded.
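Such a nightly defragmentation step could be the maintenance plan's reorganize option, or an explicit rebuild like this sketch (the table name and fill factor are placeholders):

```sql
-- Rebuild all indexes on one table with a 90% fill factor.
-- 'SomeTable' is a made-up name.
DBCC DBREINDEX ('SomeTable', '', 90)
```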

It is not likely that this will cause any performance problems. Trying
to shrink the database, on the other hand, may, since shrinking leads to
fragmentation. To truly shrink it, you would have to rebuild from 
scripts and reload. Definitely not worth it.
 


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Reply esquel (7068) 12/24/2007 2:31:19 PM

"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A109F25D9F96Yazorman@127.0.0.1...
> Neil (nospam@nospam.net) writes:
>> Also, I ran the query you gave, and all the tables appear to be the
>> sizes they should be. At least none seemed very large, large enough to
>> account for 2 GB.
>
> In that case sp_spaceused for the database should report a lot of free
> space.
>

It shows about half a gig of unused space. Here's the printout:

database_size      unallocated space
--------------------------------------
3355.75 MB         -2571.25 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------ 
6069248 KB         2477728 KB         3066760 KB         524760 KB


>> I appreciate you saying not to worry about it. But, still, how could a
>> database that has been steady at 1 GB just all of sudden go from 1 GB to 
>> 3
>> GB in one fell swoop, for no apparent reason. And, if it did do that (and
>> never did anything like that before), wouldn't that mean that performance
>> would suffer, if there's 2 GB worth of garbage in there somehow?
>
> Do you run a regular maintenance job on the database that defragments
> indexes?

The maintenance job that is run nightly performs the following:

Optimizations tab:

Reorganize data and index pages
     (change free space per page percentage to 10%)

Remove unused space from database files
    (shrink database when it grows beyond 50 MB)
    (amount of free space to remain after shrink: 10% of the data space)

Integrity tab:

Check database integrity
    (include indexes)
        (attempt to repair any minor problems)

Thanks!

Neil


Reply nospam52 (1566) 12/24/2007 9:07:51 PM

Neil (nospam@nospam.net) writes:
> It shows about half a gig of unused space. Here's the printout:
> 
> database_size      unallocated space
> --------------------------------------
> 3355.75 MB         -2571.25 MB
> 
> reserved           data               index_size         unused
> ------------------ ------------------ ------------------ -----------------
> 6069248 KB         2477728 KB         3066760 KB         524760 KB

The negative number for unallocated space is spooky. Run it again,
but now like this:

   sp_spaceused NULL, true

That will make sure the values in sysindexes are updated.

By the way, are you still on SQL 7? I seem to recall that you talked
about moving on to SQL 2005, but did that materialise?
 
> The maintenance job that is run nightly performs the following:
> 
> Optimizations tab:
> 
> Reorganize data and index pages
>      (change free space per page percentage to 10%)

That's OK.
 
> Remove unused space from database files
>     (shrink database when it grows beyond 50 MB)
>     (amount of free space to remain after shrink: 10% of the data space)

But remove this one. Shrinking the database is not a good thing to do
on a regular basis.
 

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Reply esquel (7068) 12/24/2007 11:21:57 PM

"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A114FA9BFBDYazorman@127.0.0.1...
> Neil (nospam@nospam.net) writes:
>> It shows about half a gig of unused space. Here's the printout:
>>
>> database_size      unallocated space
>> --------------------------------------
>> 3355.75 MB         -2571.25 MB
>>
>> reserved           data               index_size         unused
>> ------------------ ------------------ ------------------ -----------------
>> 6069248 KB         2477728 KB         3066760 KB         524760 KB
>
> The negative number for unallocated space is spooky. Run it again,
> but now like this:
>
>   sp_spaceused NULL, true
>
> That will make sure the values in sysindexes are updated.
>

OK, here it is:

database_size      unallocated space
------------------ ------------------ 
3355.75 MB         338.75 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------ 
3089408 KB         2480768 KB         166896 KB          441744 KB



> By the way, are you still on SQL 7? I seem to recall that you talked
> up moving on to SQL 2005, but did that materialise?

It's still upcoming. Hopefully in the next month or two. You think that 
might make a difference with this situation?

>
>> The maintenance job that is run nightly performs the following:
>>
>> Optimizations tab:
>>
>> Reorganize data and index pages
>>      (change free space per page percentage to 10%)
>
> That's OK.
>
>> Remove unused space from database files
>>     (shrink database when it grows beyond 50 MB)
>>     (amount of free space to remain after shrink: 10% of the data space)
>
> But remove this one. Shrinking the database is not a good thing to do
> on a regular basis.

OK, removed it. Is that something I should do periodically?

Thanks!

Neil



Reply nospam52 (1566) 12/25/2007 1:15:37 AM

OK, Erland, here's another strange one.

I took the last backup from before the database ballooned, and I downloaded 
it to my development machine, so as to be able to look at the old vs. new 
data, to see if I missed anything.

After restoring the backup, the database size was 1.5 GB, though it was 
previously 1.0 GB. Going into Shrink Database, it showed that only 1.0 GB 
was used. So I performed a Shrink Database on the old database.

The Shrink Database took a very long time. When it was done, I was told, 
"The database has been shrunk to a size of 2 GB"!!! So it grew instead of 
shrinking!

Looking at the files, the MDF is 1.26 GB, and the LDF is 0.75 GB, for a 
total of 2 GB.

So the database that had been 1 GB grew to 1.5 GB after the restore, and 
then 2.0 GB after shrinking.

This is the backup that was performed right after I made the table changes. 
Remember that I said that I made the changes *after* the optimizations that 
night, but before the backup. The backup file itself is 1.1 GB. It was the 
following night's backup that grew to 3 GB.

So, apparently, making the changes, in and of themselves, didn't balloon the 
database. But something after that -- probably the following night's 
optimizations, which included a Shrink Database -- caused it to balloon to 3 
GB.

A significant difference between my test and the actual database, though, is 
that when the old database grew to 2 GB, 0.75 GB of that was from the log 
file. With the current 3 GB database, though, the log file is less than a 
MB. The 3 GB is all in the data file.

Weird.

Neil 


Reply nospam52 (1566) 12/25/2007 2:58:06 AM

Neil (nospam@nospam.net) writes:
> I took the last backup from before the database ballooned, and I
> downloaded it to my development machine, so as to be able to look at the
> old vs. new data, to see if I missed anything. 

Wait, didn't I tell you to celebrate Christmas and relax?!?!?

> After restoring the backup, the database size was 1.5 GB, though it was 
> previously 1.0 GB. Going into Shrink Database, it showed that only 1.0 GB 
> was used. So I performed a Shrink Database on the old database.
> 
> The Shrink Database took a very long time. When it was done, I was told, 
> "The database has been shrunk to a size of 2 GB"!!! So it grew instead of 
> shrinking!

Haven't I told you to stop shrinking databases! There are very few 
situations where shrinking a database is a good idea. But there are many 
where it is an outright bad idea. 
 
> So, apparently, making the changes, in and of themselves, didn't balloon
> the database. But sometime after that -- probably the following night's 
> optimizations, which included a Shrink Database, caused it to balloon to
> 3 GB. 

Probably it was not until the reindexing that SQL Server allocated full 
space for the new columns.

Judging from the new numbers from sp_spaceused you posted, there is 
plenty of free space in the database. Let it stay that way, it's 
not going to do you any harm. Although the amount of unused space
is possibly a little worrying, since that indicates quite an amount
of fragmentation. But if you stop shrinking your database, the next
reindexing job should take care of that. (Shrinking introduces 
fragmentation, another reason it's bad.)

>> By the way, are you still on SQL 7? I seem to recall that you talked
>> up moving on to SQL 2005, but did that materialise?
> 
> It's still upcoming. Hopefully in the next month or two. You think that 
> might make a difference with this situation?
 
The database is not going to shrink when you upgrade to SQL 2005, if
that is what you think. Let me put it this way: you have recently
become the proud owner of a 3GB database, congratulations!





-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Reply esquel (7068) 12/25/2007 9:22:06 AM

"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A116ABC6CF5CYazorman@127.0.0.1...
> Neil (nospam@nospam.net) writes:
>> I took the last backup from before the database ballooned, and I
>> downloaded it to my development machine, so as to be able to look at the
>> old vs. new data, to see if I missed anything.
>
> Wait, didn't I tell you to celebrate Christmas and relax?!?!?

What makes you think I'm not relaxing?.... :-)


>> So, apparently, making the changes, in and of themselves, didn't balloon
>> the database. But sometime after that -- probably the following night's
>> optimizations, which included a Shrink Database, caused it to balloon to
>> 3 GB.
>
> Probably it was not until the reindexing that SQL Server allocated full
> space for the new columns.
>
> Judging from the new numbers from sp_spaceused you posted, there is
> plenty of free space in the database. Let it stay that way, it's
> not going to do you any harm. Although the amount of unused space
> is possibly a little worrying, since that indicates quite an amount
> of fragmentation. But if you stop shrinking your database, the next
> reindexing job should take care of that. (Shrinking introduces
> fragmentation, another reason it's bad.)

I went to run another sp_spaceused, to see what it looked like, since I 
turned shrinking off a couple of days ago. But then I saw that shrinking ran 
last night. Couldn't figure that out, since I had turned it off. Turns out I 
had _two_ shrink jobs running each night! -- one with the regular 
optimizations, and one as a standalone job! Oy! So I turned the second one 
off, and we'll see what happens.

BTW, why do they even have this shrinking thing if it's not needed? I mean, 
I don't doubt you when you say it does more harm than good; but why is it 
there in the first place if it just causes fragmentation?


>
>>> By the way, are you still on SQL 7? I seem to recall that you talked
>>> up moving on to SQL 2005, but did that materialise?
>>
>> It's still upcoming. Hopefully in the next month or two. You think that
>> might make a difference with this situation?
>
> The database is not going to shrink if you upgrade to SQL 2005 if
> that is what you think.

I meant either that: a) perhaps SQL 2005 might have some superior tools for 
dealing with this; and/or b) when the database is converted to SQL 2005 the 
objects might be rewritten in such a way that it would get rid of the 
bloating. But I guess no to (a) or (b).


> Let me put it this way: you have recently
> become the proud owner of a 3GB database, congratulations!

Yay me! :-) OK, that's fine, I can live with that, as long as it's not 
problematic re. performance. But there's something I'm still not 
understanding. Forgive me for being dense, but here goes.

The second sp_spaceused showed:

database_size      unallocated space
------------------ ------------------
3320.06 MB         297.91 MB

reserved           data               index_size         unused
------------------ ------------------ ------------------ ------------------
3094688 KB         2483672 KB         169712 KB          441304 KB


The data is 2.4 GB, which is 1.4 GB more than the DB was previously (and 
when added to the index_size and unused, accounts for the additional 2 GB). 
Where is this 1.4 GB coming from? I mean, there's only at most 1 GB of 
actual data in the database. So what is the other 1.4 GB? Again, forgive me 
for being dense here if you've already explained it.

Thanks!

Neil


Reply nospam52 (1566) 12/26/2007 10:54:38 AM

Neil (nospam@nospam.net) writes:
> BTW, why do they even have this shrinking thing if it's not needed? I
> mean, I don't doubt you when you say it does more harm than good; but
> why is it there in the first place if it just causes fragmentation? 

Good question. Well, for the shrinking facility as such there are of
course situations where it's useful. You have just erased five years of
data, and you expect the business to be a lot calmer for the next year,
so your database is now oversized. Or you took a copy of the production
database to get a development database to play with, but you only want
a fraction of the data, so you delete most and then shrink. This 
becomes even more critical if you create one database per developer.

Now, as for why they put it in as an option for maintenance jobs, that's a
really good question. Unfortunately, there are plenty of things in 
the tools that violate best practices for the server.

> I meant either that: a) perhaps SQL 2005 might have some superior tools
> for dealing with this; and/or b) when the database is converted to SQL
> 2005 the objects might be rewritten in such a way that it would get rid
> of the bloating. But I guess no to (a) or (b). 

The tools are about the same, but the behaviour may be a little different.

> The data is 2.4 GB, which is 1.4 GB more than the DB was previously (and
> when added to the index_size and unused, accounts for the additional 2
> GB). Where is this 1.4 GB coming from? I mean, there's only at most 1 GB
> of actual data in the database. So what is the other 1.4 GB? Again,
> forgive me for being dense here if you've already explained it. 

The figure of 2.4 GB comes from used pages. If only 100 bytes are actually
used on a page, that still counts as 8192 bytes. That is, this is another
sign of fragmentation.
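As a rough sketch of that arithmetic (the row counts and page densities below are made up, not taken from this database):

```sql
-- 1,000,000 rows of 100 bytes each, at 8 KB per page:
-- tightly packed (~80 rows/page) vs. badly fragmented (12 rows/page).
SELECT CEILING(1000000.0 / 80) * 8 AS tight_kb,      -- 100000 KB, ~98 MB
       CEILING(1000000.0 / 12) * 8 AS fragmented_kb  -- 666672 KB, ~651 MB
```

Same rows, very different "data" figures, purely from page density.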

You could run DBCC SHOWCONTIG on some big tables to have a look at
fragmentation. But if you reindex every night, and don't shrink anything,
the situation should improve.

By the way, does this query return any rows:

  SELECT * FROM sysindexes WHERE indid = 0

Or, put another way: do you have tables without a clustered index?

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Reply esquel (7068) 12/26/2007 1:14:01 PM

> You could run DBCC SHOWCONTIG on some big tables to have a look at
> fragmentation. But if you reindex every night, and don't shrink anything,
> the situation should improve.

I ran it on the five largest tables in the db (based on the bytes used, not 
the pages). The Extent Scan Fragmentation for those five tables was: 86%, 
70%, 97%, 98%, 74%.

Another thing that might be a factor here is that I noticed that disk space 
is very low on the drive that contains the database. I have been keeping 4 
weeks' worth of backups for this database. When the backup ballooned to 3 GB 
instead of 1 GB, it started eating up more drive space. Last night's backup 
(after, I think, 5 of these 3 GB backups were on the drive) wouldn't go 
through, as there was only 2 GB free on the drive. I deleted some backups, 
and changed the storage to 1 week, instead of 4, and the backup went 
through. Still, I wonder how much the limited disk space is affecting the 
current situation.


> By the way, does this query return any rows:
>
>  SELECT * FROM sysindexes WHERE indid = 0
>
> Or put in another way: do you have tables without a clustered index?

Yes, apparently so. The query returned 51 rows.

Thanks,

Neil 


Reply nospam52 (1566) 12/27/2007 12:32:34 PM

Neil (nospam@nospam.net) writes:
>> You could run DBCC SHOWCONTIG on some big tables to have a look at
>> fragmentation. But if you reindex every night, and don't shrink anything,
>> the situation should improve.
> 
> I ran it on the five largest tables in the db (based on the bytes used,
> not the pages). The Extent Scan Fragmentation for those five tables was:
> 86%, 70%, 97%, 98%, 74%. 

That's quite high. What values do you have for Scan Density and Avg. Page 
Density (full)? Or post the full output for one of these tables.

> Another thing that might be a factor here is that I noticed that disk
> space is very low on the drive that contains the database. I have been
> keeping 4 weeks worth of backups for this database. When the backup
> ballooned to 3 gb instead of 1 gb, it started eating up more drive
> space. Last night's backup (after, I think, 5 of these 3 gb backups were
> on the drive) wouldn't go through, as there was only 2 gb free on the
> drive. I deleted some backups, and changed the storage to 1 week,
> instead of 4, and the backup went through. Still, I wonder how much the
> limited disk space is affecting the current situation. 

The limited disk space is not going to affect your database, but
obviously the expanded database will affect the available disk
space. Then again, you are not storing the backups on the same drive
are you? (Well, if you get them on tape or some other media as well,
I guess it's OK.)

>> Or put another way: do you have tables without a clustered index?
> 
> Yes, apparently so. The query returned 51 rows.

And that included your big tables? 

I think we're on to something here. Your "optimization job" is in vain;
you cannot defragment a heap (heap = table without a clustered index).

Adding clustered indexes on all tables takes a little thought to
make the best choice for the index. Then again, if you only have
one index on a table, that is probably the best choice.

But you could also just add a clustered index on any column and 
then drop it. That will defragment the table.
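Something along these lines, where the table and column names are just 
placeholders for whatever you have:

```sql
-- Building the clustered index rewrites the heap in order, reclaiming
-- the fragmented extents (SQL 7/2000 DROP INDEX syntax):
CREATE CLUSTERED INDEX ix_tmp_defrag ON dbo.SomeTable (SomeColumn)
DROP INDEX dbo.SomeTable.ix_tmp_defrag
```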

While I know a few who disagree, I think it's best practice to 
have a clustered index on all tables.


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
Reply esquel (7068) 12/27/2007 10:26:31 PM

"Erland Sommarskog" <esquel@sommarskog.se> wrote in message 
news:Xns9A13EFC4CFA19Yazorman@127.0.0.1...
> Neil (nospam@nospam.net) writes:
>>> You could run DBCC SHOWCONTIG on some big tables to have a look at
>>> fragmentation. But if you reindex every night, and don't shrink 
>>> anything,
>>> the situation should improve.
>>
>> I ran it on the five largest tables in the db (based on the bytes used,
>> not the pages). The Extent Scan Fragmentation for those five tables was:
>> 86%, 70%, 97%, 98%, 74%.
>
> That's quite high. What values do you have for Scan Density and Avg. Page
> Density (full)? Or post the full output for one of these tables.

Here's the full output. Will reply to the rest later (am out the door...). 
Thanks!

DBCC SHOWCONTIG scanning 'CustomerMerges' table...
Table: 'CustomerMerges' (709733731); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 725
- Extents Scanned..............................: 96
- Extent Switches..............................: 95
- Avg. Pages per Extent........................: 7.6
- Scan Density [Best Count:Actual Count].......: 94.79% [91:96]
- Extent Scan Fragmentation ...................: 86.46%
- Avg. Bytes Free per Page.....................: 404.7
- Avg. Page Density (full).....................: 95.00%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.

DBCC SHOWCONTIG scanning 'ImageFilesProcessed' table...
Table: 'ImageFilesProcessed' (1992550332); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1695
- Extents Scanned..............................: 214
- Extent Switches..............................: 213
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 99.07% [212:214]
- Extent Scan Fragmentation ...................: 70.09%
- Avg. Bytes Free per Page.....................: 375.4
- Avg. Page Density (full).....................: 95.36%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.

DBCC SHOWCONTIG scanning 'CustActivity' table...
Table: 'CustActivity' (1006730739); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 270
- Extents Scanned..............................: 38
- Extent Switches..............................: 37
- Avg. Pages per Extent........................: 7.1
- Scan Density [Best Count:Actual Count].......: 89.47% [34:38]
- Extent Scan Fragmentation ...................: 97.37%
- Avg. Bytes Free per Page.....................: 392.2
- Avg. Page Density (full).....................: 95.15%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.

DBCC SHOWCONTIG scanning 'CustomerEvents' table...
Table: 'CustomerEvents' (1029734871); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 320
- Extents Scanned..............................: 46
- Extent Switches..............................: 45
- Avg. Pages per Extent........................: 7.0
- Scan Density [Best Count:Actual Count].......: 86.96% [40:46]
- Extent Scan Fragmentation ...................: 97.83%
- Avg. Bytes Free per Page.....................: 376.9
- Avg. Page Density (full).....................: 95.34%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.

DBCC SHOWCONTIG scanning 'ImageFileErrors' table...
Table: 'ImageFileErrors' (69067482); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1936
- Extents Scanned..............................: 243
- Extent Switches..............................: 242
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.59% [242:243]
- Extent Scan Fragmentation ...................: 74.49%
- Avg. Bytes Free per Page.....................: 344.1
- Avg. Page Density (full).....................: 95.75%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.


Reply nospam52 (1566) 12/27/2007 11:36:50 PM

Neil (nospam@nospam.net) writes:
> Here's the full output. Will reply to the rest later (am out the door...). 

OK, the numbers look good, besides the extent scan fragmentation, that is.
But it's not that your pages are half-full or something like that.


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
Reply esquel (7068) 12/28/2007 8:31:09 AM

>>> Or put another way: do you have tables without a clustered index?
>>
>> Yes, apparently so. The query returned 51 rows.
>
> And that included your big tables?
>
> I think we're on to something here. Your "optimization job" is in vain;
> you cannot defragment a heap (heap = table without a clustered index).
>
> Adding clustered indexes on all tables takes a little thought to
> make the best choice for the index. Then again, if you only have
> one index on a table, that is probably the best choice.
>
> But you could also just add a clustered index on any column and
> then drop it. That will defragment the table.
>
> While I know a few who disagree, I think it's best practice to
> have a clustered index on all tables.


OK, yes, none of the five tables I previously reported on had a clustered 
index. I had created primary keys on those tables, but not clustered 
indexes. For some reason, I think I misunderstood this blurb from BOL:

"PRIMARY KEY constraints create clustered indexes automatically if no 
clustered index already exists on the table and a nonclustered index is not 
specified when you create the PRIMARY KEY constraint."

I think I assumed that the clustered index would automatically be created on 
the pk index.

OK, so I added clustered indexes to those five tables (either on the primary 
key or on an index that I felt approximated the order that the data would be 
used in; or, in the case of child tables, on the foreign key field). I also 
added clustered indexes on about 10 other tables that stood out as tables of 
significant size.

For the five large tables, the before->after on Reserved was as follows:

CustomerMerges: 226661->959
ImageFilesProcessed: 72333->1727
CustActivity: 38034->303
CustomerEvents: 28018->367
ImageFileErrors: 24391->2046

So, some very dramatic changes there!

In terms of fragmentation, also some very dramatic changes. Here is the 
"after" data for those five tables:


DBCC SHOWCONTIG scanning 'CustomerMerges' table...
Table: 'CustomerMerges' (709733731); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 952
- Extents Scanned..............................: 119
- Extent Switches..............................: 118
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [119:119]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 6.72%
- Avg. Bytes Free per Page.....................: 12.3
- Avg. Page Density (full).....................: 99.85%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.

DBCC SHOWCONTIG scanning 'ImageFilesProcessed' table...
Table: 'ImageFilesProcessed' (1992550332); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1763
- Extents Scanned..............................: 221
- Extent Switches..............................: 220
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [221:221]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.90%
- Avg. Bytes Free per Page.....................: 48.1
- Avg. Page Density (full).....................: 99.41%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.

DBCC SHOWCONTIG scanning 'CustActivity' table...
Table: 'CustActivity' (1006730739); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 277
- Extents Scanned..............................: 35
- Extent Switches..............................: 34
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 100.00% [35:35]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 2.86%
- Avg. Bytes Free per Page.....................: 41.6
- Avg. Page Density (full).....................: 99.49%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.

DBCC SHOWCONTIG scanning 'CustomerEvents' table...
Table: 'CustomerEvents' (1029734871); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 337
- Extents Scanned..............................: 43
- Extent Switches..............................: 42
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 100.00% [43:43]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 6.98%
- Avg. Bytes Free per Page.....................: 38.6
- Avg. Page Density (full).....................: 99.52%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.

DBCC SHOWCONTIG scanning 'ImageFileErrors' table...
Table: 'ImageFileErrors' (69067482); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1968
- Extents Scanned..............................: 246
- Extent Switches..............................: 245
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [246:246]
- Logical Scan Fragmentation ..................: 2.74%
- Extent Scan Fragmentation ...................: 23.58%
- Avg. Bytes Free per Page.....................: 68.1
- Avg. Page Density (full).....................: 99.16%
DBCC execution completed. If DBCC printed error messages, contact your 
system administrator.



I then ran the "sp_spaceused NULL, true" command again. But the results were 
pretty much the same as they had been previously. I then ran the 
optimization job and then reran the "sp_spaceused NULL, true" command, and 
the results were still pretty much the same.



In any case, there does seem to be much improvement by adding the clustered 
index. So thank you for that!



I was wondering: is there a way to have the query SELECT * FROM sysindexes 
WHERE indid = 0 list the actual names of the tables without clustered 
indexes, instead of just their IDs?


Thanks, Erland!



Neil


Reply nospam52 (1566) 12/28/2007 11:46:48 AM

Neil (nospam@nospam.net) writes:
> "PRIMARY KEY constraints create clustered indexes automatically if no 
> clustered index already exists on the table and a nonclustered index is
> not specified when you create the PRIMARY KEY constraint." 
> 
> I think I assumed that the clustered index would automatically be
> created on the pk index. 

That's correct. If you say 

   CREATE TABLE alfons(a int NOT NULL, 
                       b int NOT NULL,
                       c int NOT NULL,
                       CONSTRAINT pk_alfons PRIMARY KEY(a, b, c))

The primary key will indeed be clustered. You need to include NONCLUSTERED
to have it nonclustered.
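Conversely, to get a nonclustered primary key and put the clustered index 
somewhere else, the sketch would be:

```sql
CREATE TABLE alfons(a int NOT NULL,
                    b int NOT NULL,
                    c int NOT NULL,
                    CONSTRAINT pk_alfons PRIMARY KEY NONCLUSTERED (a, b, c))
-- Clustered index on whichever column best matches the access pattern:
CREATE CLUSTERED INDEX cix_alfons ON alfons(a)
```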

Possibly if you created the tables and keys through some table designer,
you may have gotten non-clustered PKs.

> For the five large tables, the before->after on Reserved was as follows:
> 
> CustomerMerges: 226661->959
> ImageFilesProcessed: 72333->1727
> CustActivity: 38034->303
> CustomerEvents: 28018->367
> ImageFileErrors: 24391->2046
> 
> So, some very dramatic changes there!

Indeed, but...
 
> I then ran the "sp_spaceused NULL, true" command again. But the results
> were pretty much the same as they had been previously. I then ran the 
> optimization job and then reran the "sp_spaceused NULL, true" command,
> and the results were still pretty much the same. 

One would have expected the numbers here to fall as well. 

And there is another thing that is fishy as well. The "Pages Scanned"
in the old output was nowhere close to the numbers for Reserved. Now,
Reserved also includes non-clustered indexes, but I can't see how an
non-clustered index could be that badly fragmented when you run 
defragmentation regularly.

So I am a little uncertain that you really gained that much. Maybe
something is flaky with SQL 7. But if you really gained that much space
by adding clustering indexes - then I would say that you really have a 
case for shrinking. But that would be if sp_spaceused reported over
1GB of space unallocated.


> I was wondering if there's a way to have the query: SELECT * FROM
> sysindexes WHERE indid = 0 list the actual table names instead of the
> IDs of the tables without clustered indexes? 
 
object_name(id). But the name of the "index" is the table name for
indid = 0.
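That is, something like:

```sql
-- Heaps (indid = 0) listed by table name
SELECT object_name(id) AS TableName
FROM   sysindexes
WHERE  indid = 0
ORDER  BY TableName
```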


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
Reply esquel (7068) 12/28/2007 10:45:41 PM

> Possibly if you created the tables and keys through some table designer,
> you may have gotten non-clustered PKs.

I believe that's what happened. When I originally converted the old MDB file
to SQL Server, DTS created clustered PKs, as well as timestamp fields. The
tables I have created since were created through an Access ADP file, which
apparently doesn't create clustered PKs (nor timestamp fields).

>> I then ran the "sp_spaceused NULL, true" command again. But the results
>> were pretty much the same as they had been previously. I then ran the
>> optimization job and then reran the "sp_spaceused NULL, true" command,
>> and the results were still pretty much the same.
>
> One would have expected the numbers here to fall as well.
>

There were still 37 tables that didn't have clustered indexes. Most of them 
were tiny lookup tables. Nevertheless, I added clustered indexes to all 
tables, except for seven that were created by SQL Server and had no indexes 
at all. These were:

sysfiles1
sysindexkeys
sysforeignkeys
sysmembers
sysprotects
sysfiles
Trace_References



After creating all the clustered indexes, I noticed that the tables taking 
up the most space were the ones I expected to (the previous "top five" had 
been surprises to me). So things now look much more like what I would have 
expected, in terms of which tables are taking up the most space.



Nevertheless, running "sp_spaceused NULL, true" still didn't reveal any 
changes. Also ran optimization and reran it again, but still no change.



So this gets back to my original question. If there is about 1 GB of data in 
the db, and the database is showing 2.5 GB, then where's that other 1.5 GB 
coming from? Fragmentation seems to have been reduced greatly. So what is 
that other 1.5 GB that is being reported? This is very strange.


> And there is another thing that is fishy as well. The "Pages Scanned"
> in the old output was nowhere close to the numbers for Reserved. Now,
> Reserved also includes non-clustered indexes, but I can't see how a
> non-clustered index could be that badly fragmented when you run
> defragmentation regularly.
>
> So I am a little uncertain that you really gained that much. Maybe
> something is flaky with SQL 7. But if you really gained that much space
> by adding clustering indexes - then I would say that you really have a
> case for shrinking. But that would be if sp_spaceused reported over
> 1GB of space unallocated.

Yeah, there was still only about 300 MB unallocated. But I ran the shrink 
database anyway. No change with sp_spaceused.

So, I don't know. This is very strange -- especially the way it just 
happened overnight like that.

Thanks,

Neil 


Reply nospam52 (1566) 12/29/2007 6:57:10 AM

Neil (nospam@nospam.net) writes:
> So this gets back to my original question. If there is about 1 GB of
> data in the db; and the database is showing 2.5 GB of data; then where's
> that other 1.5 GB of data coming from? Fragmentation seems to have been
> reduced greatly. So what is that other 1.5 GB that is being reported?
> This is very strange. 
 
I'm afraid that by now I have exhausted all ideas. With the drastic
reduction of CustomerMerges, one would expect to see a big increase in
unallocated space. 226661 pages is 1.8 GB. I suspect that the number
226661 is bogus somehow.
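If you want to see, per table, where the reserved pages sit, you could try a 
query of this kind against sysindexes (a sketch for the SQL 7/2000 system 
tables; reserved is counted in 8 KB pages, and given the above the numbers 
may be off until the usage counts have been corrected):

```sql
-- Per-table reserved space; indid 0/1 covers data + indexes,
-- indid 255 covers text/image pages. 128 pages of 8 KB = 1 MB.
SELECT object_name(id) AS TableName,
       SUM(reserved) / 128 AS ReservedMB
FROM   sysindexes
WHERE  indid IN (0, 1, 255)
GROUP  BY id
ORDER  BY ReservedMB DESC
```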

The only thing I can suggest is that you create a new database from scripts,
and then load it with data from the current database. It may not be worth
the hassle though.

-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
Reply esquel (7068) 12/29/2007 9:53:31 AM

On 29.12.2007 10:53, Erland Sommarskog wrote:
> Neil (nospam@nospam.net) writes:
>> So this gets back to my original question. If there is about 1 GB of
>> data in the db; and the database is showing 2.5 GB of data; then where's
>> that other 1.5 GB of data coming from? Fragmentation seems to have been
>> reduced greatly. So what is that other 1.5 GB that is being reported?
>> This is very strange. 
>  
> I'm afraid that by now I have exhausted all ideas. With the drastic
> reduction of CustomerMerges, one would expect to see a big increase in
> unallocated space. 226661 pages is 1.8 GB. I suspect that the number
> 226661 is bogus somehow.
> 
> The only thing I can suggest is that you create a new database from scripts,
> and then load it with data from the current database. It may not be worth
> the hassle though.

But if Neil switches SQL Server versions during the process (he is still 
on 7, with two newer versions out) it might still be worthwhile (proper 
licenses available, of course).  Or would you rather disregard that 
option for compatibility reasons?

Kind regards

	robert
Reply shortcutter (5831) 12/29/2007 5:49:34 PM

Robert Klemme (shortcutter@googlemail.com) writes:
> But if Neil switches SQL Server versions during the process (he is still 
> on 7 with two newer versions) it might still be worthwhile (proper 
> licenses available of course).  Or would you rather disregard that 
> option for compatibility reasons?
 
There is some point in doing the migration by rebuild and scripting, 
as there are some funky behaviours with migrated databases. However, these 
are not very serious. (The only one I recall is that 
sys.dm_os_buffer_descriptors may give bogus values.) So it is still 
questionable whether it's worth the hassle. 


Funny enough, though, when we upgraded our customers from SQL 2000 to 
SQL 2005 we took this path at most sites. But we had our own special 
reason: we wanted to move away from ANSI_NULLS off, and the only supported 
way to do this is to recreate all tables, as there is an ANSI_NULLS flag 
setting per table. The only other way to change it is to update a status bit
in sysobjects, which is not recommended. (But we still did this at some
sites where the database size precluded a reload.)


-- 
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
Reply esquel (7068) 12/29/2007 6:52:34 PM
comp.databases.ms-sqlserver · 23 Replies · 62 Views


  • Permalink
  • submit to reddit
  • Email
  • Follow


Reply:

Similar Artilces:

Database Size and Table Size calculations
I am using DB2 v8.2 on Red Hat Linux. I have a small issue with the calculations. I run the following to calculate the database size: CALL GET_DBSIZE_INFO(?, ?, ?, 0) To calculate the space used by a table I use the following query: select t.card * (sum(avgcollen) + 10) from syscat.tables t, syscat.syscolumns c where t.tabname=c.tabname and t.tabschema = c.tabschema AND t.TYPE ='T' To calculate the space used by an index on a table I use the following query: select i.nleaf * t.pagesize from syscat.indexes i, syscat.TABLESPACES t, syscat.tables ts where ts.TBSPACEID ...

size of database
I use a MS ACCESS database with the size of 250MB. The dababase consists of 60 tables and no other objects. The objects in this database can not be changed. I wanted to reduce the size of database and deleted about 50% of records in all 60 tables but the size of this database remains 250MB. Could someone tell me why and how to reduce it. Thanks Try: Tools | Database Utilities | Compact -- Allen Browne - Microsoft MVP. Perth, Western Australia. Tips for Access users - http://allenbrowne.com/tips.html Reply to group, rather than allenbrowne at mvps dot org. "john_liu"...

Database Size
Is it much of a problem is an Access database reaches over 1 gig in size? I can easily see the database I designed reaching that figure in a few weeks time. Even with the compacting/repair utility I run every day. The 1 gig limit applies to Access 97 the limit on later versions is 2 gig. However, because your database can include linked tables in other files, its total size is limited only by available storage capacity. I'm guessing that you have some basic design problems. (I have fairly complex apps with 100K+ records that are 10-30 megs in combined file size). Have you split yo...

Size of database
Hi, How can I know the size of the database on a server Postgresql 7.3.4 ? Regards. -- ============================================== | FREDERIC MASSOT | | http://www.juliana-multimedia.com | | mailto:frederic@juliana-multimedia.com | ===========================Debian=GNU/Linux=== Frederic Massot wrote: > Hi, > > How can I know the size of the database on a server Postgresql 7.3.4 ? There is a chapter in the 7.4 docs called "Managing disk space". -- Bruce Momjian | http://candle.pha.pa.us pgman@candl...

Database size
I have a Librarian Pro database that is 115.2MB in size. I have a lot more to add to it, but it is slow, slow, slow. I doubled the size of my iMac to its maximum size of 4 GB. I'm wondering if the problem is that I bought a consumer database when my library is more of a small business size. And if so, what are my options to get my current database working. (Buy something that can import my Librarian Pro database - but which works faster?) Any suggestions? In article <j92746t0hqv0o4jid5qrs7luc2442bfgkn@4ax.com>, Howard Brazee <howard@brazee.net> wrote: > I have a...

Database size
How would I get the total size of a mysql database in MB format? -- Sharif T. Karim ....you don't know wrath yet... On Mon, 19 Jul 2004 22:49:53 GMT, "Sharif T. Karim" <sharif@nyc.rr.com> wrote: >How would I get the total size of a mysql database in MB format? Log on to the database server, find the data directory, and use "du -hs" on the directory. -- Andy Hassall <andy@andyh.co.uk> / Space: disk usage analysis tool http://www.andyh.co.uk / http://www.andyhsoftware.co.uk/space "Sharif T. Karim" wrote: > How would I get the...

Database size
Dear all, I would like to ask is there any DB2 SQL statement that can be used to show the overall database size? (i.e. display the allocated/used data files or tablespaces (either SMS or DMS) size??) Thanks! Henry I don't know of any SQL that will do that but the following command will: list tablespace containers for n show detail where n is the tablespace number (TSID) for the tablespace you are interested in viewing. From my RHEL4 UDB 8.1 FP9 system after connecting to database SAMPLE: [.....]$ db2 list tablespace containers for 2 show detail Ta...

size of the database
Hi, How can i estimate the approximate size of the database as a whole. This is required because i want to take the backup of the database and have to make sure that that much space is available. Thanks Rahul On Sep 10, 3:07 pm, Rahul Babbar <rahul.babb...@gmail.com> wrote: > Hi, > > How can i estimate the approximate size of the database as a whole. > This is required because i want to take the backup of the database and > have to make sure that that much space is available. > > Thanks > > Rahul use db2 CALL GET_DBSIZE_INFO(?, ?, ?...

database size
is there a way to find the size of a database(s)? not by estimating tables or calculating something but more straight-forward like through sql on sys tables, etc? thanks in advance! That is a meaningless question that is often asked. What do you mean by "size"? What are you trying to find out? How much space is being used? How many records there are? How many bytes of actual data are stored? How about indexes? How about the data dictionary? etc. etc etc... "tomL" <schmotom@yahoo.com> wrote in message news:e70357eb.0310101111.2b978fd6@posting.google.com...

Size of database
Hi, How can I know the size of the database on a server Postgresql 7.3.4 ? Regards. -- ============================================== | FREDERIC MASSOT | | http://www.juliana-multimedia.com | | mailto:frederic@juliana-multimedia.com | ===========================Debian=GNU/Linux=== Frederic Massot wrote: > Hi, > > How can I know the size of the database on a server Postgresql 7.3.4 ? There is a chapter in the 7.4 docs called "Managing disk space". -- Bruce Momjian | http://candle.pha.pa.us pgman@candl...

Recommended database size?
Hello, I'm looking at switching to FileMaker from Access since it's looking like I'm not going to be administrating things much longer and I need a good, easy-to-use DB for my replacement. I read a review of FileMaker on PC World's website and one of the drawbacks mentioned was that FileMaker's probably not the best choice for large DBs. For this project we've got 7-8 tables. Most of the tables only have several hundred records in them, but one of them will likely be close to 100000 records (with probably 15 or so fields) by the time the project is finished. Is thi...

Database size question
Our database size is currently 4G and is incrementing at a rate of 45M/day. What is the max size of a SQL database? And what is the size beyond which the server performance will start to go down? dchow wrote: >Our database size is currently 4G and is incrementing at a rate of >45M/day. What is the max size of a SQL database? And what is the >size beyond which the server performance will start to go down? > > > On what hardware? From where I sit you've probably already passed that point you just don't know it. But until end-users start complaining I'd ...

Oracle Database Sizing
Hi all, I am not very knowledgable in this area so would appreciate any advice you guys can give. I need to provide a projected estimate of Oracle db tables. Essentially my problem is in trying to work out the size of one single row. I have a table which uses the varchar2, integer and date datatypes but I cannot seem to find anywhere how many storage bytes each of these use (with the exception of varchar). As far as I understand for integers and numbers it depends on the actual number entered - is this correct? If so - is there anyway I can estimate this based on the data I know will be ent...

How to know the size of database...
Hi All!! If we have the size of cold backup & export backup in 1024 blocks, can we calculate or find out the size of database... Regards, Ashutosh What is the database ? db2 ? oracle ? the backup should be pretty much the size of the db... If its db2 then, "db2 list tablespaces show detail" to get page size and total pages for each tablespace, and add ( total pages * page size ) to get a total. HTH Mark Taylor ...

table/database size
dear respected gurus, i have been working several years in access, but the size of db reaches till 250MB max. now we have one very big projects where we will have hundred of thousdands of records. so i had read a lot on this group in access 97 the limit of tables/db size was 1Gb, it increase to acc2k 2Gb , then same for acc2002, then again same for access 2003 so since long time version releaseing but Microsoft has not increase the limits.as i had check access specifications also on network enviornment, if i split apps in access 2 mdb,one as backend one as front. can i utulize ms sql serve...

Database size optimization?
I have a database in which I have 2 tables. One table is Employees, other is Events. I have only one field in Employees and it is EmployeeName (text 50 chars). Events has fields AutoNumber, Start Date/Time, End Date/Time, EmployeeName. I have relationships EmployeeName from Employees 1 to many for Events. My question is: Does every record in Events table take space for EmployeeName or is it coded in database in another way so it doesnt take up size of EmployeeName? How would I have to organize my database so it would use the least space per record? Thanx Mickey previously wrote: > ...

MSMWS Database size
HI, when MSMWS was introduced by Micronetics, there was a limit on databasesize about 20MB max. Does anybody know, if this limit still exists with MSMWS2? TX A.v.Harten Armin van Harten asked: > when MSMWS was introduced by Micronetics, there was a limit > on databasesize > about 20MB max. Does anybody know, if this limit still exists > with MSMWS2? AFAIK it can create/mount/access d/bs of up to 16GB each. That's what the User Guide claims, and I just created one with 512000 blocks (1K each). HTH, John Murray ...

Database Size Estimation
Hi, I have been asked to do the database size estimation. I have the the list of tables, indexes and average number of rows expected at this point and also for the next 5 years...With that how can i estimate the database size. This the first time i am exposed to do such thing. Any help is highly appreciated. I am hoping to get a tool or some excel sheet where i key in the respective information to get the sizing !! is something possible like that ?? Rgds "Vijays" <srinivasanvijay_2000@yahoo.com> schrieb im Newsbeitrag news:fe3d44d1.0311140052.54dc0530@posting.google.com... ...

Odd Database Size
Hi, I have experienced strange behavior in my Access 97 database. I went into an existing database, which has a size of 30 MB, and changed three lines of VBA code behind reports; 15 reports were modified. After compacting the database I got a 33.5 MB database. What did I change that caused the database to jump by 3.5 megabytes in size? What can I do to clean up that absolutely odd behavior of Access 97? Thanks in advance, Alex

Hi Alex, use the DECOMPILE command to decrease the db size. Check this on the Advisor.com site. *** Sent via Developersdex http://www.developersdex.com *** Don't jus...

Database size increase
Hi there, is there any reason why my db would almost double in size after I copied everything to a new db after my old one became corrupted? Is it anything to worry about? Thanks - David

In the new database:
1. Uncheck the boxes under: Tools | Options | General | Name AutoCorrect. Explanation of why: http://allenbrowne.com/bug-03.html
2. Compact: Tools | Database Utilities | Compact.
3. Then decompile. Enter something like this at the command prompt while Access is not running. It is all one line, and include the quotes: "c:\Program Files\Microsoft office\office\msacce...

Database Size Limit
I tried to find a reference to this on the Microsoft help pages but can't find any; maybe I'm just not smart enough to find it. However, I have been working on a membership database where I "hold" a photograph in a table. All is working as I would expect, so I won't bore you with the details; it works almost identically to the Northwind sample database "employee" table. My problem is that I get an error once I have about 30 photographs attached: "database has exceeded size limits" or something like that. I know the database is getting b...

Decrease database size
I have a large database currently using up 30 GB of space. The problem is, I am running out of space on my hard drive and so have to do some cleanup. One of the tables has a blob field, and I find that I can safely delete some of this blob data because I will not be needing it in the database anymore. So the question is: if I delete some of the blob data from this table, will the database size decrease? Is there anything else I can do to reduce the size of the database? Thank you so very much for any help with this!

Sue: > > So the question is, if I delete some of the blob d...
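The usual answer to Sue's question is that deleting rows frees pages *inside* the database file, but the file itself only shrinks after an explicit compact step (VACUUM in SQLite, DBCC SHRINKDATABASE/SHRINKFILE on SQL Server, Compact in Access). A small sketch of that behavior, using SQLite purely as a stand-in since the original engine isn't named:

```python
import os
import sqlite3
import tempfile

# Fill a file-backed SQLite db with blob rows, delete them, and watch
# the file size: the delete alone does not shrink the file; VACUUM does.
path = os.path.join(tempfile.mkdtemp(), "blobs.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, payload BLOB)")
conn.executemany(
    "INSERT INTO docs (payload) VALUES (?)",
    [(b"x" * 100_000,) for _ in range(50)],   # ~5 MB of blob data
)
conn.commit()
before = os.path.getsize(path)

conn.execute("DELETE FROM docs")
conn.commit()
after_delete = os.path.getsize(path)          # pages freed, file unchanged

conn.execute("VACUUM")                        # rewrite the file compactly
after_vacuum = os.path.getsize(path)

print(before, after_delete, after_vacuum)
```

The same principle is why the thread starter's Shrink "didn't help": shrinking only reclaims space the engine has already marked free, and some engines hold freed pages for reuse instead of returning them to the OS.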

Size of a DB2 database
Hi all, is there any way to determine the size of a DB2 database without using list tablespaces? I am working on DB2 UDB 8.2 FP12 on LUW. Thanks

Asphalt Blazer wrote: > Is there any way to determine the size of a DB2 database without > using list tablespaces?

You can use the GET_DBSIZE_INFO procedure. Check this link: http://tinyurl.com/95k8a ...

Regarding database Size
Hi, on one of my dev server is giving following error when i am trying to get the db size > db2 'call get_dbsize_info(?,?,?,1)' SQL0443N Routine "*IZE_INFO" (specific name "") has returned an error SQLSTATE with diagnostic text "SQL0444 Reason code or token: *TAMPDIFF, SQL020422182517115". SQLSTATE=38553 while the above command is working fine on PROD. As a background, i just restored the backup image of PROD on DEV and created a new db with prod image. I used following 3 commands/steps to do it (might be helpful):- (1)db2 -v restor...