Convert SAP Oracle Database to IBM DB2 Database?

Hello,

I would like to hear from anyone here who has converted their SAP Oracle 
database to IBM DB2 database?

Did you realize greater disk saving via DB2 compression?

Did you run the latest version of DB2 rather than allow SAP to keep your 
Oracle version back-leveled?

Please, DB2 Bashers need not apply, looking for the business case anyone 
used to convert to DB2.

Thank you.

Charles


0
2/6/2008 1:52:09 AM
comp.databases.oracle.server 22978 articles. 1 follower.

59 Replies
1297 Views


On 6 Feb, 01:52, "Charles Davis" <cdavis10...@comcast.net> wrote:
> Hello,
>
> I would like to hear from anyone here who has converted their SAP Oracle
> database to IBM DB2 database?
>
> Did you realize greater disk saving via DB2 compression?
>
> Did you run the latest version of DB2 rather than allow SAP to keep your
> Oracle version back-leveled?
>
> Please, DB2 Bashers need not apply, looking for the business case anyone
> used to convert to DB2.
>
> Thank you.
>
> Charles

I'm no DB2 basher, however, from your post you seem to be insinuating
that disk space is the driving force for this conversion.  Is this the
case?

HTH

-g
0
gareth2106 (865)
2/6/2008 1:13:53 PM
Charles Davis wrote:
> I would like to hear from anyone here who has converted their SAP Oracle 
> database to IBM DB2 database?
Check your in-box at the email address you use here.
I have passed you a contact that I consider as close to non-partisan as 
I could think of (and more importantly, I don't know what she'll say 
over a beer).

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/6/2008 1:26:04 PM
Charles Davis wrote:
> Hello,
> 
> I would like to hear from anyone here who has converted their SAP Oracle 
> database to IBM DB2 database?
> 
> Did you realize greater disk saving via DB2 compression?
> 
> Did you run the latest version of DB2 rather than allow SAP to keep your 
> Oracle version back-leveled?
> 
> Please, DB2 Bashers need not apply, looking for the business case anyone 
> used to convert to DB2.
> 
> Thank you.
> 
> Charles

Version information is relevant.

My testing of Oracle compression versus DB2 compression indicates that
DB2 has some catching up to do.

But as gazzag indicates ... surely you are not considering a
multimillion dollar implementation change solely on the basis of a
few TB of disk. If that's the case I've no doubt someone at Oracle
will gladly buy you the disk you need.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/6/2008 4:56:36 PM
On Feb 6, 2:52 am, "Charles Davis" <cdavis10...@comcast.net> wrote:
> Hello,
>
> I would like to hear from anyone here who has converted their SAP Oracle
> database to IBM DB2 database?
>
> Did you realize greater disk saving via DB2 compression?
>
> Did you run the latest version of DB2 rather than allow SAP to keep your
> Oracle version back-leveled?
>
> Please, DB2 Bashers need not apply, looking for the business case anyone
> used to convert to DB2.
>
> Thank you.
>
> Charles

Never done it myself, but you may find this whitepaper of interest:
ftp://ftp.software.ibm.com/software/data/pubs/papers/DB2-SAP-compression.pdf


HTH

--
Jeroen
0
nltaal (56)
2/6/2008 5:29:13 PM
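Beyond the whitepaper, the potential saving for a specific system can be estimated in place before any conversion. A minimal sketch, assuming DB2 9.5's SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO table function (column names from memory; the schema and table here are hypothetical SAP names, and on DB2 9.1 the rough equivalent is INSPECT ... ROWCOMPESTIMATE):

```sql
-- ESTIMATE mode samples the table and reports what a compression
-- dictionary would save, without changing any data.
SELECT tabname,
       pages_saved_percent,
       bytes_saved_percent
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('SAPR3', 'GLPCA', 'ESTIMATE')) AS t;
```

Running this per large table turns the disk part of the business case into numbers before anyone commits to a migration.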
On Feb 6, 5:56 pm, DA Morgan <damor...@psoug.org> wrote:
> Charles Davis wrote:
> > Hello,
>
> > I would like to hear from anyone here who has converted their SAP Oracle
> > database to IBM DB2 database?
>
> > Did you realize greater disk saving via DB2 compression?
>
> > Did you run the latest version of DB2 rather than allow SAP to keep your
> > Oracle version back-leveled?
>
> > Please, DB2 Bashers need not apply, looking for the business case anyone
> > used to convert to DB2.
>
> > Thank you.
>
> > Charles
>
> Version information is relevant.
>
> My testing of Oracle compression versus DB2 compression indicates that
> DB2 has some catching up to do.

Hmm, like you said yourself: version (and platform!) information is
relevant. What's your test environment for DB2? (assuming 11g for
Oracle, probably on Linux?)

>
> But as gazzag indicates ... surely you are not considering a
> multimillion dollar implementation change solely on the basis of a
> few TB of disk. If that's the case I've no doubt someone at Oracle
> will gladly buy you the disk you need.

Compression is not only to do with disk space reduction, it can also
effect the performance of both online transactions and batch
processing.

--
Jeroen
0
nltaal (56)
2/6/2008 5:55:21 PM
DA Morgan wrote:
> Charles Davis wrote:
>> Hello,
>>
>> I would like to hear from anyone here who has converted their SAP 
>> Oracle database to IBM DB2 database?
>>
>> Did you realize greater disk saving via DB2 compression?
>>
>> Did you run the latest version of DB2 rather than allow SAP to keep 
>> your Oracle version back-leveled?
>>
>> Please, DB2 Bashers need not apply, looking for the business case 
>> anyone used to convert to DB2.
>>
>> Thank you.
>>
>> Charles
> 
> Version information is relevant.
> 
> My testing of Oracle compression versus DB2 compression indicates that
> DB2 has some catching up to do.
Interesting, care to elaborate? Always striving to improve...
Given Oracle's DeWitt clause I'll gladly accept any information you have 
by email as to not get you into trouble.

Cheers
Serge

-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/6/2008 6:15:03 PM
On Feb 7, 4:29 am, Jeroen van den Broek
<nlt...@baasbovenbaas.demon.nl> wrote:

> Never done it myself, but you may find this whitepaper of interest: ftp://ftp.software.ibm.com/software/data/pubs/papers/DB2-SAP-compress...
>

you know what is amazing in this paper?
How it shows improvements in CPU in the application server
when the db server gets compressed rows!
Must be an amazing "feature", this compression that
can change the performance in a different system...
0
wizofoz2k (1386)
2/7/2008 1:46:15 AM
Noons wrote:
> you know what is amazing in this paper?
> How it shows improvements in CPU in the application server
> when the db server gets compressed rows!
> Must be an amzing "feature", this compression that
> can change the performance in a different system...
What are you referring to? Care to share a page number?

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/7/2008 5:36:44 PM
Jeroen van den Broek wrote:
> On Feb 6, 5:56 pm, DA Morgan <damor...@psoug.org> wrote:
>> Charles Davis wrote:
>>> Hello,
>>> I would like to hear from anyone here who has converted their SAP Oracle
>>> database to IBM DB2 database?
>>> Did you realize greater disk saving via DB2 compression?
>>> Did you run the latest version of DB2 rather than allow SAP to keep your
>>> Oracle version back-leveled?
>>> Please, DB2 Bashers need not apply, looking for the business case anyone
>>> used to convert to DB2.
>>> Thank you.
>>> Charles
>> Version information is relevant.
>>
>> My testing of Oracle compression versus DB2 compression indicates that
>> DB2 has some catching up to do.
> 
> Hmm, like you said yourself: version (and platform!) information is
> relevant. What's your test environment for DB2? (assuming 11g for
> Oracle, probably on Linux?)

9 on RHEL 4

>> But as gazzag indicates ... surely you are not considering a
>> multimillion dollar implementation change solely on the basis of a
>> few TB of disk. If that's the case I've no doubt someone at Oracle
>> will gladly buy you the disk you need.
> 
> Compression is not only to do with disk space reduction, it can also
> effect the performance of both online transactions and batch
> processing.

Again: If that is the only problem you are about to waste a minimum
of several million dollars.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/7/2008 5:41:46 PM
Serge Rielau wrote:
> DA Morgan wrote:
>> Charles Davis wrote:
>>> Hello,
>>>
>>> I would like to hear from anyone here who has converted their SAP 
>>> Oracle database to IBM DB2 database?
>>>
>>> Did you realize greater disk saving via DB2 compression?
>>>
>>> Did you run the latest version of DB2 rather than allow SAP to keep 
>>> your Oracle version back-leveled?
>>>
>>> Please, DB2 Bashers need not apply, looking for the business case 
>>> anyone used to convert to DB2.
>>>
>>> Thank you.
>>>
>>> Charles
>>
>> Version information is relevant.
>>
>> My testing of Oracle compression versus DB2 compression indicates that
>> DB2 has some catching up to do.
> Interesting, care to elaborate? Always thriving to improve...
> Given Oracle's DeWitt clause I'll gladly accept any information you have 
> by email as to not get you into trouble.
> 
> Cheers
> Serge

Going to have to disappoint you this time for reasons I am sure you can
appreciate.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/7/2008 5:43:04 PM
On Feb 6, 5:46 pm, Noons <wizofo...@yahoo.com.au> wrote:
> On Feb 7, 4:29 am, Jeroen van den Broek
>
> <nlt...@baasbovenbaas.demon.nl> wrote:
> > Never done it myself, but you may find this whitepaper of interest: ftp://ftp.software.ibm.com/software/data/pubs/papers/DB2-SAP-compress...
>
> you know what is amazing in this paper?
> How it shows improvements in CPU in the application server
> when the db server gets compressed rows!
> Must be an amzing "feature", this compression that
> can change the performance in a different system...

I haven't bothered to read the paper, but if you think about it,
changing the performance of a system can affect the performance of
another system.  Thought experiment:  app server spends lots of cycles
processing data coming through the network, starving the cycles needed
to process local processes.  Slow down the data coming through the
network by making the db server slower processing decompression from
db, more cycles available to process local work.

Of course, if there is sufficient power in an app server, feeding it
more data from the db server will show up as better performance too.
That ought to be obvious.

[Glances at paper]  I only see one system in the paper, am I missing
something?  Also looks like they didn't really push the envelope to a
point where compression bottlenecks really would show up
("...investigation should go deeper in future." - well, duh), though
maybe I didn't look closely enough - if they had started with cpu
usage close to the edge, things certainly would have gone south fast.
As shown, I would guess for full-table-scan type batch jobs the
decrease in scan time is much greater than any increase in cpu time -
which seems to be the point.

Since most systems I've seen are eventually pushed to their limits, I
think this sort of trade-off can bring the day of necessary hardware
upgrade closer.  But maybe the DB2 world is different.

jg
--
@home.com is bogus.
Yes, you too can play with Alan White and Nick Mason:
http://www.undercover.com.au/News-Story.aspx?id=3692
0
joel-garry (4553)
2/7/2008 6:06:58 PM
DA Morgan wrote:
> Serge Rielau wrote:
>> DA Morgan wrote:
>>> My testing of Oracle compression versus DB2 compression indicates that
>>> DB2 has some catching up to do.
>> Interesting, care to elaborate? Always thriving to improve...
>> Given Oracle's DeWitt clause I'll gladly accept any information you 
>> have by email as to not get you into trouble.
> Going to have to disappoint you this time for reasons I am sure you can
> appreciate.
So an unsubstantiated claim... As an aside, you should use DB2 9.5 to 
compare to Oracle 11.

-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/7/2008 9:10:47 PM
joel garry wrote:
> Since most systems I've seen are eventually pushed to their limits, I
> think this sort of trade-off can bring the day of necessary hardware
> upgrade closer.  But maybe the DB2 world is different.
The design point is that most systems are I/O and/or main memory bound 
and have some headroom on CPU. If the system is CPU bound then 
performance is less likely to improve.

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/7/2008 9:23:39 PM
Serge Rielau wrote:
> DA Morgan wrote:
>> Serge Rielau wrote:
>>> DA Morgan wrote:
>>>> My testing of Oracle compression versus DB2 compression indicates that
>>>> DB2 has some catching up to do.
>>> Interesting, care to elaborate? Always thriving to improve...
>>> Given Oracle's DeWitt clause I'll gladly accept any information you 
>>> have by email as to not get you into trouble.
>> Going to have to disappoint you this time for reasons I am sure you can
>> appreciate.
> So an unsubstantiated claim... Aside you should use DB2 9.5 to compare 
> to Oracle 11

If I'd used Oracle 11 I would have.

But as long as you are actively participating how about a brief
technical presentation from the expert explaining to us how compression
in DB2 differs from that in Oracle 10g and 11g.

Please also include information on how headers are designed and how
blocks are filled. Top-down as in Informix, both top-down and
bottom-up as in Oracle, or some other means?

Thanks.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/7/2008 11:26:31 PM
Serge Rielau wrote:
> joel garry wrote:
>> Since most systems I've seen are eventually pushed to their limits, I
>> think this sort of trade-off can bring the day of necessary hardware
>> upgrade closer.  But maybe the DB2 world is different.
> The design point is that most systems are I/O and/or main memory bound 
> and have have some headroom on CPU. If the system is CPU bound then 
> performance is less likely to improve.
> 
> Cheers
> Serge

Though the time required to read 100 blocks is still going to be less
than the time required to read 300-400 blocks. So while you wouldn't
want to add additional CPU overhead you might still see a performance
improvement.

PS: You ought to measure the CPU required in Oracle 11g. A hint ...
it is very very small. Not like what some other vendors have done. <g>
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/7/2008 11:28:26 PM
Serge Rielau wrote:

> > you know what is amazing in this paper?
> > How it shows improvements in CPU in the application server
> > when the db server gets compressed rows!
> > Must be an amzing "feature", this compression that
> > can change the performance in a different system...

> What are you referring to? Care to share a page number?

What's the matter, can't you read basic English?

What I'm talking about is that paper making
false claims about db CPU use reduction by measuring
application dialogue CPU use.

SAP is a multi-tier application-server based system.
If you change the CPU usage in the database it works against,
you change NOTHING in the CPU usage of the application
server in which SAP is being executed.  Note that I am talking
about CPU Usage, the term used in that paper. NOT overall
wall clock time or duration.

The paper contains multiple examples near the end
of "dialogue test cases" in which the "CPU usage" by
SAP was reduced.

Yeah, I'm sure this was caused by DB2's  compression.
And the pope is not a Catholic.

Talk about NO CLUE from whatever lousy marketer
pushed this one out...
0
wizofoz2k (1386)
2/8/2008 2:05:54 AM
DA Morgan wrote:
> Serge Rielau wrote:
>> DA Morgan wrote:
>>> Serge Rielau wrote:
>>>> DA Morgan wrote:
>>>>> My testing of Oracle compression versus DB2 compression indicates that
>>>>> DB2 has some catching up to do.
>>>> Interesting, care to elaborate? Always thriving to improve...
>>>> Given Oracle's DeWitt clause I'll gladly accept any information you 
>>>> have by email as to not get you into trouble.
>>> Going to have to disappoint you this time for reasons I am sure you can
>>> appreciate.
>> So an unsubstantiated claim... Aside you should use DB2 9.5 to compare 
>> to Oracle 11
> 
> If I'd used Oracle 11 I would have.
> 
> But as long as you are actively participating how about a brief
> technical presentation from the expert explaining to us how compression
> in DB2 differs from that in Oracle 10g and 11g.
> 
> Please include also information as to how headers are designed and how
> blocks are filled? Top down as in Informix for both Top-Down and
> Bottom-Up as in Oracle? Or some other means.
http://blogs.ittoolbox.com/database/technology/archives/compression-in-11g-vs-db2-9-21173
Maybe we can also take the discussion there....

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/8/2008 2:32:24 AM
DA Morgan wrote:
> Serge Rielau wrote:
>> joel garry wrote:
>>> Since most systems I've seen are eventually pushed to their limits, I
>>> think this sort of trade-off can bring the day of necessary hardware
>>> upgrade closer.  But maybe the DB2 world is different.
>> The design point is that most systems are I/O and/or main memory bound 
>> and have have some headroom on CPU. If the system is CPU bound then 
>> performance is less likely to improve.
>>
>> Cheers
>> Serge
> 
> Though the time required to read 100 blocks is still going to be less
> than the time required to read 300-400 blocks. So while you wouldn't
> want to add additional CPU overhead you might still see a performance
> improvement.
> 
> PS: You ought to measure the CPU required in Oracle 11g. A hint ...
> it is very very small. Not like what some other vendors have done. <g>
Apparently Oracle assumed CPU-bound systems... we all make our choices.
Time will tell.

Cheers
Serge

-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/8/2008 2:35:44 AM
Noons wrote:
> Serge Rielau wrote:
> 
>>> you know what is amazing in this paper?
>>> How it shows improvements in CPU in the application server
>>> when the db server gets compressed rows!
>>> Must be an amzing "feature", this compression that
>>> can change the performance in a different system...
> 
>> What are you referring to? Care to share a page number?
> 
> What's the matter, can't you read basic English?
I apparently have difficulty reading things into a text..

> What I'm talking about is that paper making
> false claims about db CPU use reduction by measuring
> application dialogue CPU use.
It does? Where does it say that it is measuring application dialog CPU use?
It's measuring three values:
CPU time
  Which consistently is SERVER.
  While that is not spelled out explicitly all of section 6.3 is pretty
  clear about it.
Response Time
  That's bumper to bumper including the app
DB Time
   Wall clock time consumed on the server
   (I hope you take no issue with CPU time > Wall clock time)

> 
> SAP is a multi-tier application-server based system.
> If you change the CPU usage in the database it works against,
> you change NOTHING in the CPU usage of the application
> server in which SAP is being executed.  Note that I am talking
> about CPU Usage, the term used in that paper. NOT overall
> wall clock time or duration.
I agree. But what substantiates your claim that this is what it says?
It is unfortunate that the paper assumed a reader who doesn't actually 
try to misunderstand the obvious definition for the purpose of 
discrediting it.

> The paper contains multiple examples near the end
> of " dialogue test cases"  in which the " CPU usage"  by
> SAP was reduced.
By CPU Usage SAP? Where does it say that? CPU Usage has not been redefined.
It merely correlates the data previously collected on the server to 
specific dialogs.
Please, perhaps my English is really bad.. point to your source so I can 
learn. Don't leave me ignorant.

> Yeah, I'm sure this was caused by DB2's  compression.
> And the pope is not a Catholic.
The server-side CPU changes, yes. No claims made on the app.

> Talk about NO CLUE, from whoever lousy marketeer
> pushed this one out...
I took the liberty of checking out the author.
Just as the keyword "PRODUCTION" in the title might suggest, he is indeed 
a DBA.
This is "his" system. Note that he is looking at long-term results, not 
some 3-hour run. The graph itself covers a month.

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/8/2008 3:04:50 AM
Serge Rielau wrote:
> DA Morgan wrote:
>> Serge Rielau wrote:
>>> joel garry wrote:
>>>> Since most systems I've seen are eventually pushed to their limits, I
>>>> think this sort of trade-off can bring the day of necessary hardware
>>>> upgrade closer.  But maybe the DB2 world is different.
>>> The design point is that most systems are I/O and/or main memory 
>>> bound and have have some headroom on CPU. If the system is CPU bound 
>>> then performance is less likely to improve.
>>>
>>> Cheers
>>> Serge
>>
>> Though the time required to read 100 blocks is still going to be less
>> than the time required to read 300-400 blocks. So while you wouldn't
>> want to add additional CPU overhead you might still see a performance
>> improvement.
>>
>> PS: You ought to measure the CPU required in Oracle 11g. A hint ...
>> it is very very small. Not like what some other vendors have done. <g>
> Apparently Oracle assumed CPU bound systems.. we all make our choices.
> Time will tell.
> 
> Cheers
> Serge

Actually I think Oracle assumes storage costs money.
Look up Pillar Data Systems, funded by Lawrence Investments.
Only one guess on Lawrence's last name.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/8/2008 3:07:53 AM
Serge Rielau wrote:
> DA Morgan wrote:
>> Serge Rielau wrote:
>>> DA Morgan wrote:
>>>> Serge Rielau wrote:
>>>>> DA Morgan wrote:
>>>>>> My testing of Oracle compression versus DB2 compression indicates 
>>>>>> that
>>>>>> DB2 has some catching up to do.
>>>>> Interesting, care to elaborate? Always thriving to improve...
>>>>> Given Oracle's DeWitt clause I'll gladly accept any information you 
>>>>> have by email as to not get you into trouble.
>>>> Going to have to disappoint you this time for reasons I am sure you can
>>>> appreciate.
>>> So an unsubstantiated claim... Aside you should use DB2 9.5 to 
>>> compare to Oracle 11
>>
>> If I'd used Oracle 11 I would have.
>>
>> But as long as you are actively participating how about a brief
>> technical presentation from the expert explaining to us how compression
>> in DB2 differs from that in Oracle 10g and 11g.
>>
>> Please include also information as to how headers are designed and how
>> blocks are filled? Top down as in Informix for both Top-Down and
>> Bottom-Up as in Oracle? Or some other means.
> http://blogs.ittoolbox.com/database/technology/archives/compression-in-11g-vs-db2-9-21173 
> 
> Maybe we can also take the discussion there....
> 
> Cheers
> Serge

I would but there is a rather substantial problem: Chris Eaton's
understanding of Oracle compression appears to not even include
the marketing materials much less the rather substantial discussion
of the concepts and architecture.

What I'm saying is that it is not just superficial ... from a technical 
point-of-view it is mostly incorrect. He needs to learn how to read the
on-line docs.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/8/2008 3:11:15 AM
DA Morgan wrote:
> Serge Rielau wrote:
>> DA Morgan wrote:
>>> Serge Rielau wrote:
>>>> DA Morgan wrote:
>>>>> Serge Rielau wrote:
>>>>>> DA Morgan wrote:
>>>>>>> My testing of Oracle compression versus DB2 compression indicates 
>>>>>>> that
>>>>>>> DB2 has some catching up to do.
>>>>>> Interesting, care to elaborate? Always thriving to improve...
>>>>>> Given Oracle's DeWitt clause I'll gladly accept any information 
>>>>>> you have by email as to not get you into trouble.
>>>>> Going to have to disappoint you this time for reasons I am sure you 
>>>>> can
>>>>> appreciate.
>>>> So an unsubstantiated claim... Aside you should use DB2 9.5 to 
>>>> compare to Oracle 11
>>>
>>> If I'd used Oracle 11 I would have.
>>>
>>> But as long as you are actively participating how about a brief
>>> technical presentation from the expert explaining to us how compression
>>> in DB2 differs from that in Oracle 10g and 11g.
>>>
>>> Please include also information as to how headers are designed and how
>>> blocks are filled? Top down as in Informix for both Top-Down and
>>> Bottom-Up as in Oracle? Or some other means.
>> http://blogs.ittoolbox.com/database/technology/archives/compression-in-11g-vs-db2-9-21173 
>>
>> Maybe we can also take the discussion there....
>>
>> Cheers
>> Serge
> 
> I would but there is a rather substantial problem: Chris Eaton's
> understanding of Oracle compression appears to not even include
> the marketing materials much less the rather substantial discussion
> of the concepts and architecture.
> 
> What I'm saying is that it is not just superficial ... from a technical 
> point-of-view it is mostly incorrect. He needs to learn how to read the
> on-line docs.
Well the nice thing about BLOGs is the ability to give feedback.
You are a teacher, no? Teach!

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/8/2008 3:24:01 AM
>>
>> What I'm saying is that it is not just superficial ... from a 
>> technical point-of-view it is mostly incorrect. He needs to learn how 
>> to read the
>> on-line docs.
> Well the nice thing about BLOGs is the ability to give feedback.
> You are a teacher, no? Teach!
> 
> Cheers
> Serge

Eaton's blog is basically correct. However he mentions the negatives of 
the Oracle design, and does not mention the advantages. The greatest 
advantage of having the symbol table local to the page is that you do 
not have to do another I/O to read the symbol table - the single block 
I/O gets the compressed data and the symbols required to uncompress it. 
Our testing in large-scale environments shows that to be more beneficial 
than the IBM design. There are also additional benefits accrued from not 
having to expand a single symbol table when the data values grow, etc.

So it's horses for courses. We just think our horses are better. YMMV, 
and as always, test well.
0
2/8/2008 5:45:05 AM
Serge Rielau wrote:
> DA Morgan wrote:
>> Serge Rielau wrote:
>>> DA Morgan wrote:
>>>> Serge Rielau wrote:
>>>>> DA Morgan wrote:
>>>>>> Serge Rielau wrote:
>>>>>>> DA Morgan wrote:
>>>>>>>> My testing of Oracle compression versus DB2 compression 
>>>>>>>> indicates that
>>>>>>>> DB2 has some catching up to do.
>>>>>>> Interesting, care to elaborate? Always thriving to improve...
>>>>>>> Given Oracle's DeWitt clause I'll gladly accept any information 
>>>>>>> you have by email as to not get you into trouble.
>>>>>> Going to have to disappoint you this time for reasons I am sure 
>>>>>> you can
>>>>>> appreciate.
>>>>> So an unsubstantiated claim... Aside you should use DB2 9.5 to 
>>>>> compare to Oracle 11
>>>>
>>>> If I'd used Oracle 11 I would have.
>>>>
>>>> But as long as you are actively participating how about a brief
>>>> technical presentation from the expert explaining to us how compression
>>>> in DB2 differs from that in Oracle 10g and 11g.
>>>>
>>>> Please include also information as to how headers are designed and how
>>>> blocks are filled? Top down as in Informix for both Top-Down and
>>>> Bottom-Up as in Oracle? Or some other means.
>>> http://blogs.ittoolbox.com/database/technology/archives/compression-in-11g-vs-db2-9-21173 
>>>
>>> Maybe we can also take the discussion there....
>>>
>>> Cheers
>>> Serge
>>
>> I would but there is a rather substantial problem: Chris Eaton's
>> understanding of Oracle compression appears to not even include
>> the marketing materials much less the rather substantial discussion
>> of the concepts and architecture.
>>
>> What I'm saying is that it is not just superficial ... from a 
>> technical point-of-view it is mostly incorrect. He needs to learn how 
>> to read the
>> on-line docs.
> Well the nice thing about BLOGs is the ability to give feedback.
> You are a teacher, no? Teach!
> 
> Cheers
> Serge

Teach someone from IBM? I wouldn't presume to do such a thing.

But here's something for those might want to consider the value
of that blog entry.

CREATE TABLE t1 (
testcol VARCHAR2(50))
TABLESPACE uwdata;

CREATE TABLE t2 (
testcol VARCHAR2(50))
TABLESPACE uwdata COMPRESS;

-- Load both tables with identical random values. dbms_crypto.randombytes
-- returns RAW, implicitly converted here to a 50-character hex string.
-- Note these are conventional-path inserts: basic COMPRESS only packs
-- blocks written via direct path, and random data compresses poorly anyway.
DECLARE
  x t1.testcol%TYPE;
BEGIN
   FOR i IN 1 .. 100000 LOOP
     SELECT dbms_crypto.randombytes(25)
     INTO x
     FROM dual;

     INSERT INTO t1 VALUES (x);
     INSERT INTO t2 VALUES (x);
   END LOOP;
   COMMIT;
END;
/

-- some sample rows:
3605CAA721159CAC4E462B841419CCB7390F1AE3484FF14963
05B7AE0B6BB076EEAF3E8E7DBA1BE9D5C8F97737AA1FDF21A5
40756BCEBF00CCB80ACA5F4F6BF3AFE6BC19D19EA74F10212B
234812A15930421A208BCF19C943762B5FA11D0C0C7E811F5E
4177AFC94C248D6B6765B8CE45FE3E49E2E5456BA6BA48C147

exec dbms_stats.gather_table_stats(USER, 'T1');
exec dbms_stats.gather_table_stats(USER, 'T2');

SELECT table_name, blocks
FROM user_tables
WHERE table_name IN ('T1', 'T2');

TABLE_NAME                         BLOCKS
------------------------------ ----------
T1                                    780
T2                                    701

Hmmmmm.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/8/2008 5:58:39 AM
Mark Townsend wrote:
> 
>>>
>>> What I'm saying is that it is not just superficial ... from a 
>>> technical point-of-view it is mostly incorrect. He needs to learn how 
>>> to read the
>>> on-line docs.
>> Well the nice thing about BLOGs is the ability to give feedback.
>> You are a teacher, no? Teach!
>>
>> Cheers
>> Serge
> 
> Eaton's blog is basically correct. However he mentions the negatives of 
> the Oracle design, and does not mention the advantages. The greatest 
> advantage of having the symbol table local to the page is that you do 
> not have to do another I/O to read the symbol table - the single block 
> I/O gets the compressed data and the symbols required to uncompress it. 
> Our testing in large scale environments shows that to me more beneficial 
> than the IBM design. 
The compression dictionary is part of the table's meta-data, thus it is 
cached. There is no I/O after the first touch.

> There are also additional benefits accrued from not 
> having to expand a single symbol table when the data values grow, etc.
That works both ways. Harm vs. benefit depends on the distribution of the 
data. If data values are highly localized, then having local 
dictionaries can be helpful. Otherwise they inevitably also cause a 
great deal of repetition.

> So it's horses for courses. We just think our horses are better. YMMV, 
> and as always, test well.
Amen, and that's what keeps our jobs interesting.

Cheers
Serge

-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/8/2008 12:51:18 PM
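The table-level dictionary Serge describes is built once per table and then cached. A minimal sketch of turning row compression on under DB2 9, assuming a hypothetical table T that already holds representative data:

```sql
-- Flag the table for row compression, then rebuild it so the
-- table-wide compression dictionary is created from the current rows.
ALTER TABLE t COMPRESS YES;
REORG TABLE t RESETDICTIONARY;
```

Until the REORG builds the dictionary, existing rows stay uncompressed, which is also why dictionary quality depends on the data present at build time.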
Mark Townsend wrote:
> 
>>>
>>> What I'm saying is that it is not just superficial ... from a 
>>> technical point-of-view it is mostly incorrect. He needs to learn how 
>>> to read the
>>> on-line docs.
>> Well the nice thing about BLOGs is the ability to give feedback.
>> You are a teacher, no? Teach!
>>
>> Cheers
>> Serge
> 
> Eaton's blog is basically correct. However he mentions the negatives of 
> the Oracle design, and does not mention the advantages. The greatest 
> advantage of having the symbol table local to the page is that you do 
> not have to do another I/O to read the symbol table - the single block 
> I/O gets the compressed data and the symbols required to uncompress it. 
> Our testing in large-scale environments shows that to be more beneficial 
> than the IBM design. There are also additional benefits accrued from not 
> having to expand a single symbol table when the data values grow, etc.
> 
> So it's horses for courses. We just think our horses are better. YMMV, 
> and as always, test well.

I disagree Mark ... unless I know something I'm not supposed to talk
about (so I won't). Yes the symbol table is on a single-block basis.
But the critical information missing from Eaton's blog is the
method used to keep compression from affecting insert performance.
And of course no mention of compressed tablespaces, no mention of
compression without direct path loading, no mention of deduplication,
no mention of compressed indexes, etc.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/8/2008 3:17:26 PM
DA Morgan wrote:
> Mark Townsend wrote:
>>
>>>>
>>>> What I'm saying is that it is not just superficial ... from a 
>>>> technical point-of-view it is mostly incorrect. He needs to learn 
>>>> how to read the
>>>> on-line docs.
>>> Well the nice thing about BLOGs is the ability to give feedback.
>>> You are a teacher, no? Teach!
>>>
>>> Cheers
>>> Serge
>>
>> Eaton's blog is basically correct. However he mentions the negatives 
>> of the Oracle design, and does not mention the advantages. The 
>> greatest advantage of having the symbol table local to the page is 
>> that you do not have to do another I/O to read the symbol table - the 
>> single block I/O gets the compressed data and the symbols required to 
>> uncompress it. Our testing in large scale environments shows that to 
>> be more beneficial than the IBM design. There are also additional 
>> benefits accrued from not having to expand a single symbol table when 
>> the data values grow, etc.
>>
>> So it's horses for courses. We just think our horses are better. YMMV, 
>> and as always, test well.
> 
> I disagree Mark ... unless I know something I'm not supposed to talk
> about (so I won't). Yes the symbol table is on a single-block basis.
> But the critical information missing from Eaton's blog is the
> method used to keep compression from affecting insert performance.
> And of course no mention of compressed tablespaces, no mention of
> compression without direct path loading, no mention of deduplication,
> no mention of compressed indexes, etc.
Daniel,

I give you index compression.
Compression without direct path loading in 11g is catch-up.
DB2 9 shipped with it from the start. In DB2 9.5 the system also 
automagically creates a dictionary once a threshold in table size is 
reached. No initial reorg, load, nothing....

Can't comment on the other points as I don't know their meaning in the 
context.

Are logs compressed? What about the pages in the buffer pool? We found 
that the effective doubling of the buffer pool has a significant impact on 
either the cost of the system (reduced main memory requirements) or 
performance (better hit ratio).

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/8/2008 4:37:30 PM
Serge Rielau wrote:

> Can't comment on the other points as I don't know their meaning in the 
> context.

ask <g>

> Are logs compressed?
> What about the pages in the buffer pool? We found 
> that the effective doubling in bufferpool has significant impact on 
> either cost of the system (reduced main memory requirements) or 
> performance (better hit ratio).

As I don't know if I know something I shouldn't talk about I am going
to avoid too much here. If Mark wants to answer I will leave that to
him.

I find better compression and far better relative performance using
Oracle's solution. The thing that I think Mark's team has nailed is
when to perform the compression.

Consider this ... you have a new, empty, table. You insert one row.
What are you going to compress? Insert a second row. What are you
going to compress? Fill the block. Ask the question again.

No matter how you do it, to get back to the original point of the
thread, it is a lousy reason to consider changing the back-end from
product A to product B given what I've seen of both.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/8/2008 6:22:01 PM
DA Morgan wrote:
> Serge Rielau wrote:
> 
>> Can't comment on the other points as I don't know their meaning in the 
>> context.
> 
> ask <g>
I did.

> As I don't know if I know something I shouldn't talk about I am going
> to avoid too much here. If Mark wants to answer I will leave that to
> him.
And that was your answer...

> I find better compression and far better relative performance using
> Oracle's solution. The thing that I think Mark's team has nailed is
> when to perform the compression.
Well, spit it out. What did you test? Note that DB2 has no DeWitt clause.
No-one from IBM can come after you for posting your experience.

> Consider this ... you have a new, empty, table. You insert one row.
> What are you going to compress? Insert a second row. What are you
> going to compress? Fill the block. Ask the question again.
Why would you want to compress a small near empty table?
There is virtually no benefit.

> No matter how you do it, to get back to the original point of the
> thread, it is a lousy reason to consider changing the back-end from
> product A to product B given what I've seen of both.
I don't think it is a lousy reason. It is an insufficient reason in 
isolation.
Let's give Charles some credit and assume that he is looking at more 
variables than these two (He also asked about patch certification).

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/8/2008 7:47:42 PM
Serge Rielau wrote:
> DA Morgan wrote:
>> Serge Rielau wrote:
>>
>>> Can't comment on the other points as I don't know their meaning in 
>>> the context.
>>
>> ask <g>
> I did.
> 
>> As I don't know if I know something I shouldn't talk about I am going
>> to avoid too much here. If Mark wants to answer I will leave that to
>> him.
> And that was your answer...

For the time being.

>> I find better compression and far better relative performance using
>> Oracle's solution. The thing that I think Mark's team has nailed is
>> when to perform the compression.
> Well spit it out. What did you test? Note that DB2 has no DeWitt clause.
> No-one from IBM can come after you for posting your experience.

Without a letter authorizing me to do so from both Oracle and IBM
legal ... and this request from a guy who was afraid to use a website
... surely you jest.

>> Consider this ... you have a new, empty, table. You insert one row.
>> What are you going to compress? Insert a second row. What are you
>> going to compress? Fill the block. Ask the question again.
> Why would you want to compress a small near empty table?
> There is virtually no benefit.

I wouldn't. Something Oracle's methodology handles perfectly. Make
it part of the metadata, as with DB2, and I think that is, perhaps,
part of the issue.

>> No matter how you do it, to get back to the original point of the
>> thread, it is a lousy reason to consider changing the back-end from
>> product A to product B given what I've seen of both.
> I don't think it is a lousy reason. It is an insufficient reason in 
> isolation.

You just agreed with me.

> Let's give Charles some credit and assume that he is looking at more 
> variables than these two (He also asked about patch certification).

If there are other variables they were, and remain, unstated.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/8/2008 9:39:56 PM
DA Morgan wrote:
> Serge Rielau wrote:
>> DA Morgan wrote:
>>> Serge Rielau wrote:
>>>> DA Morgan wrote:
>>>>> Serge Rielau wrote:
>>>>>> DA Morgan wrote:
>>>>>>> Serge Rielau wrote:
>>>>>>>> DA Morgan wrote:
>>>>>>>>> My testing of Oracle compression versus DB2 compression
>>>>>>>>> indicates that
>>>>>>>>> DB2 has some catching up to do.
>>>>>>>> Interesting, care to elaborate? Always thriving to improve...
>>>>>>>> Given Oracle's DeWitt clause I'll gladly accept any information
>>>>>>>> you have by email as to not get you into trouble.
>>>>>>> Going to have to disappoint you this time for reasons I am sure
>>>>>>> you can
>>>>>>> appreciate.
>>>>>> So an unsubstantiated claim... Aside you should use DB2 9.5 to
>>>>>> compare to Oracle 11
>>>>>
>>>>> If I'd used Oracle 11 I would have.

You're still experimenting in 10g after telling the world to move to 11g 
a.s.a.p.?

>>>>>
>>>>> But as long as you are actively participating how about a brief
>>>>> technical presentation from the expert explaining to us how
>>>>> compression in DB2 differs from that in Oracle 10g and 11g.

You didn't study DB2's compression before comparing it with Oracle's?
That makes your claims even more unsubstantiated and even biased.

>>>>>
>>>>> Please include also information as to how headers are designed
>>>>> and how blocks are filled? Top down as in Informix for both
>>>>> Top-Down and Bottom-Up as in Oracle? Or some other means.
>>>> http://blogs.ittoolbox.com/database/technology/archives/compression-in-11g-vs-db2-9-21173
>>>>
>>>> Maybe we can also take the discussion there....
>>>>
>>>> Cheers
>>>> Serge
>>>
>>> I would but there is a rather substantial problem: Chris Eaton's
>>> understanding of Oracle compression appears to not even include
>>> the marketing materials much less the rather substantial discussion
>>> of the concepts and architecture.

Mark T. from Oracle Marketing seems to disagree with you ...

>>>
>>> What I'm saying is that it is not just superficial ... from a
>>> technical point-of-view it is mostly incorrect. He needs to learn
>>> how to read the
>>> on-line docs.
>> Well the nice thing about BLOGs is the ability to give feedback.
>> You are a teacher, no? Teach!
>>
>> Cheers
>> Serge
>
> Teach someone from IBM? I wouldn't presume to do such a thing.
>

So much for your claim of being an "independent instructor" ...

Afraid he will teach you a lesson or two about DB2 compression?
(or maybe even Oracle's ...)

-- 
Jeroen 


0
usenet1271 (76)
2/8/2008 11:34:27 PM
> The compression dictionary is part of the table's meta-data, thus it is cached.

At some stage this is going to hit a memory limit, as more data and more 
tables are compressed.


 > If data values are highly localized, then having local
> dictionaries can be helpful.

That IS the crux of the matter. We think most data is indeed localized, 
I would be surprised, and willing to discuss over a beer, if IBM thought 
differently.

BTW - that is why random data generation does not show off compression 
well in Oracle.
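That observation is easy to reproduce with a generic compressor (zlib here, standing in for either vendor's algorithm; sizes are illustrative):

```python
# Uniformly random bytes are incompressible; repetitive data shrinks a lot.
import os
import zlib

random_data = os.urandom(64 * 1024)                # incompressible noise
repetitive = b"Main Street, Seattle WA\n" * 2730   # ~64 KB of repeats

random_ratio = len(zlib.compress(random_data)) / len(random_data)
repeat_ratio = len(zlib.compress(repetitive)) / len(repetitive)
print(f"random: {random_ratio:.3f}  repetitive: {repeat_ratio:.3f}")
```

A benchmark generated from uniform random values therefore understates what any dictionary-based scheme achieves on real, repetitive business data.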

>> No matter how you do it, to get back to the original point of the
>> thread, it is a lousy reason to consider changing the back-end from
>> product A to product B given what I've seen of both.
> I don't think it is a lousy reason. It is an insufficient reason in isolation. 

And interestingly enough, the whole conversation has been largely 
irrelevant. The real question to ask - given product A and product B, 
under SAP, which database takes more space to store the same amount of 
data? First uncompressed, and then compressed. The answer is freely 
available to anyone with a SAP support contract.

Remember that necessity is often the mother of invention, and what is a 
feature in one product is often a bug/design fix in another.
0
2/9/2008 4:43:54 AM
Serge Rielau wrote:

> 
> Are logs compressed? 

Not compressed separately, if that's what you mean. We will compress 
them when transmitting to a standby if the network is not able to keep 
up with the transmission rate.

> What about the pages in the buffer pool? We found 
> that the effective doubling in bufferpool has significant impact on 
> either cost of the system (reduced main memory requirements) or 
> performance (better hit ratio).
> 

Will look forward to seeing compression used in some of the TPC's then ?
0
2/9/2008 4:58:38 AM
> 
>> What about the pages in the buffer pool? We found that the effective 
>> doubling in bufferpool has significant impact on either cost of the 
>> system (reduced main memory requirements) or performance (better hit 
>> ratio).
>>
> 
> Will look forward to seeing compression used in some of the TPC's then ?

Oh - and to answer the first part of the question "What about the pages 
in the buffer pool?" - Yes.
0
2/9/2008 5:05:26 AM
Mark Townsend wrote:
>> The compression dictionary is part of the table's meta-data, thus it 
>> is cached.
> 
> At some stage this is going to hit a memory limit, as more data and more 
> tables are compressed.
> 
> 
>  > If data values are highly localized, then having local
>> dictionaries can be helpful.
> 
> That IS the crux of the matter. We think most data is indeed localized, 
> I would be surprised, and willing to discuss over a beer, if IBM thought 
> differently.
> 
> BTW - that is why random data generation does not show off compression 
> well in Oracle.

The fact that it achieves as high a compression as it does is actually
quite impressive, which was the point of my demo.

>>> No matter how you do it, to get back to the original point of the
>>> thread, it is a lousy reason to consider changing the back-end from
>>> product A to product B given what I've seen of both.
>> I don't think it is a lousy reason. It is an insufficient reason in 
>> isolation. 
> 
> And interesting enough, the whole conversation has been largely 
> irrelevant. The real question to ask - given product A and product B, 
> under SAP, which database takes more space to store the same amount of 
> data ? Firstly uncompressed, and then compressed. The answer is freely 
> available to anyone with a SAP support contract.

Though I still think that's far less important to management than the size
of the pool of potential employees who can manage that infrastructure,
not just today but 5+ years from now when the current crop of gray-hair
no-hair DBAs hits retirement age.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/9/2008 5:22:42 AM
On Feb 9, 12:22 am, DA Morgan <damor...@psoug.org> wrote:
> Mark Townsend wrote:
> >> The compression dictionary is part of the table's meta-data, thus it
> >> is cached.
>
> > At some stage this is going to hit a memory limit, as more data and more
> > tables are compressed.
>
> >  > If data values are highly localized, then having local
> >> dictionaries can be helpful.
>
> > That IS the crux of the matter. We think most data is indeed localized,
> > I would be surprised, and willing to discuss over a beer, if IBM thought
> > differently.
>
> > BTW - that is why random data generation does not show off compression
> > well in Oracle.
>
> The fact that it achieves as high a compression as it does is actually
> quite impressive which was the point of my demo.
>
> >>> No matter how you do it, to get back to the original point of the
> >>> thread, it is a lousy reason to consider changing the back-end from
> >>> product A to product B given what I've seen of both.
> >> I don't think it is a lousy reason. It is an insufficient reason in
> >> isolation.
>
> > And interesting enough, the whole conversation has been largely
> > irrelevant. The real question to ask - given product A and product B,
> > under SAP, which database takes more space to store the same amount of
> > data ? Firstly uncompressed, and then compressed. The answer is freely
> > available to anyone with a SAP support contract.
>
> Though I still think that far less important to management than the size
> of the pool of potential employees that can manage that infrastructure
> not just today but 5+ years from now when the current crop of gray-hair
> no-hair DBAs hit retirement age.
> --
> Daniel A. Morgan
> Oracle Ace Director & Instructor
> University of Washington
> damor...@x.washington.edu (replace x with u to respond)
> Puget Sound Oracle Users Group
> www.psoug.org
My company is migrating from DB2 to Oracle for SAP because there are
many more people who know Oracle than DB2. SAP was developed to be
"database independent", but I read they provide libraries of their
database code optimized for Oracle as well. There are many
implementations of SAP using Oracle. I doubt the decision for choosing
DB2 vs. Oracle is ever based on compression or one or two features.
0
zigzagdna (350)
2/9/2008 5:58:24 AM
zigzagdna@yahoo.com wrote:
> On Feb 9, 12:22 am, DA Morgan <damor...@psoug.org> wrote:
>> Mark Townsend wrote:
>>>> The compression dictionary is part of the table's meta-data, thus it
>>>> is cached.
>>> At some stage this is going to hit a memory limit, as more data and more
>>> tables are compressed.
>>>  > If data values are highly localized, then having local
>>>> dictionaries can be helpful.
>>> That IS the crux of the matter. We think most data is indeed localized,
>>> I would be surprised, and willing to discuss over a beer, if IBM thought
>>> differently.
>>> BTW - that is why random data generation does not show off compression
>>> well in Oracle.
>> The fact that it achieves as high a compression as it does is actually
>> quite impressive which was the point of my demo.
>>
>>>>> No matter how you do it, to get back to the original point of the
>>>>> thread, it is a lousy reason to consider changing the back-end from
>>>>> product A to product B given what I've seen of both.
>>>> I don't think it is a lousy reason. It is an insufficient reason in
>>>> isolation.
>>> And interesting enough, the whole conversation has been largely
>>> irrelevant. The real question to ask - given product A and product B,
>>> under SAP, which database takes more space to store the same amount of
>>> data ? Firstly uncompressed, and then compressed. The answer is freely
>>> available to anyone with a SAP support contract.
>> Though I still think that far less important to management than the size
>> of the pool of potential employees that can manage that infrastructure
>> not just today but 5+ years from now when the current crop of gray-hair
>> no-hair DBAs hit retirement age.
>> --
>> Daniel A. Morgan
>> Oracle Ace Director & Instructor
>> University of Washington
>> damor...@x.washington.edu (replace x with u to respond)
>> Puget Sound Oracle Users Group
>> www.psoug.org
> My company is migrating from DB2 to Oracle for SAP because there are
> many more people who know Oracle than DB2. SAP was developed to be
> "database independent", but I read they provide libraries of their
> database code optimized for Oracle as well. There are many
> implementations of SAP using Oracle. I doubt the decision for choosing
> DB2 vs. Oracle is ever based on compression or one or two features.

That's the point I've been making but IBM is deaf.

The truth is that the mainframe dinosaurs, a tribe to which I once
belonged, are aging themselves out of the workforce. My years of experience
with punch cards, mag tape, Fortran and COBOL puts me only a single
decade ahead of the DB2 crowd which will begin retiring in droves during
this decade. No one cares about 360s, 370s, Amdahl 470s, or greenbar.

There are no schools that teach DB2, no universities, no colleges, no
trade schools, etc. And if you offered a DB2 class at the university
where I teach the only person in the room on the first day of class
would be the instructor. DB2 will become extinct because the last
practitioner will be in a walker or wheelchair ... not because of a
compression algorithm.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/9/2008 6:43:28 AM
Mark Townsend wrote:
>> The compression dictionary is part of the table's meta-data, thus it 
>> is cached.
> 
> At some stage this is going to hit a memory limit, as more data and more 
> tables are compressed.
In DB2 the size of the dictionary for a table is fixed. It does not grow 
with the size of the data. Data that is repetitive for a 1GB table is 
still repetitive for a 1TB table.
The ranges of data values in a column or row (DB2's compression is 
independent of column boundaries) do not increase as the amount of data 
increases once a sufficient size is reached.
Aside: the savings in buffer pool footprint make up for the dictionary 
many times over.
>  > If data values are highly localized, then having local
>> dictionaries can be helpful.
> That IS the crux of the matter. We think most data is indeed localized, 
> I would be surprised, and willing to discuss over a beer, if IBM thought 
> differently.
I can't speak for IBM, but I think differently.
Data in a page is typically a random collection.
Let's take an orders table. Addresses and names of customers will be random.
You may have the occasional spike in order items over time (plywood in 
hurricane season?). But overall, most data in a page is spread 
independently of its physical location.
You will need to go to multidimensional clustering (nested partitions in 
Oracle speak) before you see significant localization of values. But 
even then only the clustering data will be localized (e.g. MONTH, REGION).
What we do see is skew over time. For that reason dictionaries are local 
to range partitions.

> BTW - that is why random data generation does not show compression of 
> well in Oracle.
Globally random is artificial. But random within a range, not so sure...
To link to the TPC comment in another post, we have tested TPC-H with 
compression and found DB2 to do quite well.
TPC-C is too artificial (old). TPC-E will be much better as it is based 
on real market data.

> Remember that necessity is often the mother of invention, and what is a 
> feature in one product is often a bug/design fix in another.
You are correct in general.
Reminds me of the discussion whether package variables can be public or 
not. Necessity turns to virtue turns to differentiator.

Wars (flame here and real elsewhere) could be avoided if people 
got more perspective.
You learn a lot about yourself, your culture, your country, your race, 
your DBMS of choice, ... by trying to immerse in another instead of 
merely yelling over the fence.

There - we've gone all philosophical now...
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/9/2008 4:10:05 PM
Mark Townsend wrote:
>>> What about the pages in the buffer pool? We found that the effective 
>>> doubling in bufferpool has significant impact on either cost of the 
>>> system (reduced main memory requirements) or performance (better hit 
>>> ratio).
>> Will look forward to seeing compression used in some of the TPC's then ?
> 
> Oh - and to answer the first part of the question "What about the pages 
> in the buffer pool?" - Yes.
See my other post. I'm also looking forward to compression in TPC.

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/9/2008 4:11:56 PM
Serge Rielau wrote:

> Data in a page is typically a random collection.
> Let's take an orders table. Addresses and names of customers will be random.

To some extent this may be true but a decent statistician will tell you
otherwise.

Let's take, for example a simple table belonging to an Oracle Users Group.

CREATE TABLE test AS
SELECT sys_op_map_nonnull(per_address1) AS testcol
FROM person
WHERE per_address1 IS NOT NULL;

CREATE TABLE testcomp COMPRESS AS
SELECT * FROM test;

SQL> SELECT COUNT(*) FROM test;

   COUNT(*)
----------
       1559

SQL> SELECT COUNT(*)
   2  FROM test
   3  WHERE INSTR(testcol, '000', 1, 1) <> 0;

   COUNT(*)
----------
        252

SQL> SELECT COUNT(*)
   2  FROM test
   3  WHERE INSTR(testcol, '4E4', 1, 1) <> 0;

   COUNT(*)
----------
        325

SQL>  exec dbms_stats.gather_table_stats(USER, 'TEST');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(USER, 'TESTCOMP');

PL/SQL procedure successfully completed.

SQL>  SELECT table_name, blocks
   2  FROM user_tables
   3  WHERE table_name IN ('TEST', 'TESTCOMP');

TABLE_NAME                         BLOCKS
------------------------------ ----------
TEST                                    9
TESTCOMP                                8

It isn't 3:1 compression but it isn't insignificant either.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/9/2008 8:13:42 PM
DA Morgan wrote:
> Serge Rielau wrote:
> 
>> Data in a page is typically a random collection.
>> Lets take an orders table. Addresses and names of customers will be 
>> random.
> 
> To some extent this may be true but a decent statistician will tell you
> otherwise.
> 
> Let's take, for example a simple table belonging to an Oracle Users Group.
> 
> CREATE TABLE test AS
> SELECT sys_op_map_nonnull(per_address1) AS testcol
> FROM person
> WHERE per_address1 IS NOT NULL;
> 
> SQL> SELECT COUNT(*) FROM test;
> 
>   COUNT(*)
> ----------
>       1559
> 
> SQL> SELECT COUNT(*)
>   2  FROM test
>   3  WHERE INSTR(testcol, '000', 1, 1) <> 0;
> 
>   COUNT(*)
> ----------
>        252
> 
> SQL> SELECT COUNT(*)
>   2  FROM test
>   3  WHERE INSTR(testcol, '4E4', 1, 1) <> 0;
> 
>   COUNT(*)
> ----------
>        325
> 
> SQL>  exec dbms_stats.gather_table_stats(USER, 'TEST');
> 
> PL/SQL procedure successfully completed.
> 
> SQL> exec dbms_stats.gather_table_stats(USER, 'TESTCOMP');
> 
> PL/SQL procedure successfully completed.
> 
> SQL>  SELECT table_name, blocks
>   2  FROM user_tables
>   3  WHERE table_name IN ('TEST', 'TESTCOMP');
> 
> TABLE_NAME                         BLOCKS
> ------------------------------ ----------
> TEST                                    9
> TESTCOMP                                8
> 
> It isn't 3:1 compression but it isn't insignificant either.
Daniel, I fail to see how your test relates to my hypothesis (or Mark's).
I did not state that data values are random overall. I stated that their 
presence in particular blocks is random: i.e., it is not clustered 
according to its physical location (block).
To debunk my hypothesis you would need to show that the data in the 
8 blocks has a statistically different distribution between blocks. That 
would support the benefits of localized compression dictionaries to 
accommodate for it.
Of course 8 blocks and a few thousand rows are insufficient for any 
meaningful measurement on the topic.

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/9/2008 9:28:02 PM
Serge Rielau wrote:
> DA Morgan wrote:
>> Serge Rielau wrote:
>>
>>> Data in a page is typically a random collection.
>>> Lets take an orders table. Addresses and names of customers will be 
>>> random.
>>
>> To some extent this may be true but a decent statistician will tell you
>> otherwise.
>>
>> Let's take, for example a simple table belonging to an Oracle Users 
>> Group.
>>
>> CREATE TABLE test AS
>> SELECT sys_op_map_nonnull(per_address1) AS testcol
>> FROM person
>> WHERE per_address1 IS NOT NULL;
>>
>> SQL> SELECT COUNT(*) FROM test;
>>
>>   COUNT(*)
>> ----------
>>       1559
>>
>> SQL> SELECT COUNT(*)
>>   2  FROM test
>>   3  WHERE INSTR(testcol, '000', 1, 1) <> 0;
>>
>>   COUNT(*)
>> ----------
>>        252
>>
>> SQL> SELECT COUNT(*)
>>   2  FROM test
>>   3  WHERE INSTR(testcol, '4E4', 1, 1) <> 0;
>>
>>   COUNT(*)
>> ----------
>>        325
>>
>> SQL>  exec dbms_stats.gather_table_stats(USER, 'TEST');
>>
>> PL/SQL procedure successfully completed.
>>
>> SQL> exec dbms_stats.gather_table_stats(USER, 'TESTCOMP');
>>
>> PL/SQL procedure successfully completed.
>>
>> SQL>  SELECT table_name, blocks
>>   2  FROM user_tables
>>   3  WHERE table_name IN ('TEST', 'TESTCOMP');
>>
>> TABLE_NAME                         BLOCKS
>> ------------------------------ ----------
>> TEST                                    9
>> TESTCOMP                                8
>>
>> It isn't 3:1 compression but it isn't insignificant either.
> Daniel I fail to see how your test relates to my hypothesis (or Marks).
> I did not state that data values are random overall. I stated that there 
> presence in particular blocks is random: I.e. it is not clustered 
> according to its physical location (block).

Depends on the data. Have you looked at bank data? Phone company data?
Online reseller data? It isn't as random as you appear to think.
Consider, for example, the clustering factor on an index. That same
table that has address information to which you refer also has cities,
state/provinces, first names, and lots of other very predictably
repeating values. And, yes, at the block level.

What I am surprised you have not yet given voice to is the implication,
in Oracle, of using small blocks versus large blocks.

> To debunk my hypothesis you would need to show that the data data in the 
> 8 blocks has statistically different distribution between blocks. That 
> would support benefits of localized compression dictionaries to 
> accommodate for it.

Are you familiar with Benford's Law? If not:
http://users.skynet.be/albert.frank/benfordslaw2.htm
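For those who skip the link: Benford's Law predicts that in many naturally occurring data sets the leading digit d appears with probability log10(1 + 1/d), so '1' leads about 30% of the time. A quick sketch (illustrative only, using a synthetic growth series):

```python
# Leading digits of a multiplicative growth process follow Benford's Law.
import math
from collections import Counter

values = [1.05 ** n for n in range(1, 1001)]        # synthetic growth series
leading = [int(10 ** (math.log10(v) % 1)) for v in values]
freq = Counter(leading)

for d in range(1, 10):
    observed = freq[d] / len(values)
    expected = math.log10(1 + 1 / d)                # Benford's prediction
    print(d, round(observed, 3), round(expected, 3))
```

The point for compression: real-world values are far from uniformly distributed, which is exactly what dictionary-based schemes exploit.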

> Of course 8 blocks and a few thousand rows are insufficient for any 
> meaningful measurement on the topic.
> 
> Cheers
> Serge

If I wasn't building a Customer Service application this month for one
of Oracle's new customers ... a publicly traded public utility ... F5
load balancers, clustered app servers, RAC clusters, etc., and preparing
to speak at the Rocky Mountain OUG and Northern California OUG in the
next two weeks I'd further accommodate you. <g>

I'll be in Redwood Shores on the 18th of the month. Perhaps you could
fly in and join us.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/10/2008 1:24:26 AM
DA Morgan wrote:
> Depends on the data. Have you looked at bank data? phone company data?
> online reseller data? It isn't as random as you appear to think.
> Consider, for example, the clustering factor on an index. That same
> table that has address information to which you refer also has cities,
> state/provinces, first names, and lots of other very predictably
> repeating values. And, yes, at the block level.
Are you saying that first names strongly correlate with your clustering?
Street names? What do you think is better:
storing "Main Street" and "Smith" in a global dictionary, or repeating 
them in every block?

> What I am surprised you have not yet given voice to is the implication,
> in Oracle, of using small blocks versus large blocks.
You teach. I comment.

-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/10/2008 1:44:21 AM
On Feb 8, 2:04 pm, Serge Rielau <srie...@ca.ibm.com> wrote:



> It does? Where does it say that it is measuring application dialog CPU use?

Sections 6.4 and 6.5. The name of section
6.5 is precisely
"Dialog Response Times", flowing from a
transaction measure.

So why are you asking me instead of reading it,
as I asked you to do?

> It's measuring three values:
> CPU time
>   Which consistently is SERVER.

NO.  NOWHERE does it say it is the
DB server CPU that is being measured.  In fact,
section 6.4 very clearly says it's the application
CPU usage!

WTH has that got to do with db2 compression?


>   While that is not spelled out explicitly all of section 6.3 is pretty
>   clear about it.
> Response Time

of WHAT?
By itself, that means exactly jack and you know it.


>    Wall clock time consumed on the server
>    (I hope you take no issue with CPU time > Wall clock time)

What I have issue with is the word "server"
used to describe CPU time in a multi-tier
system:

WHICH damn server?


> I agree. But what substantiates your claim that this is what it says?

I dunno, maybe the words:
"The values were taken from transaction ST06"?
Is that a new function in db2 or is that, like I said,
app server code?

BTW, what substantiates yours that it isn't?


> It is unfortunate that the paper assumed a reader who doesn't actually
> try to misunderstand the obvious definition for the purpose of
> discrediting it.

It is most unfortunate that once again IBM
marketing has produced a paper that would cause
a group of unbiased professional engineers to
convulse with laughter.

When one publishes a performance paper,
one DEFINES the baseline that is being
measured and its location.  That has not
been done here.


> By CPU Usage SAP?

By CPU usage of ST06, for example.
Is that now part of DB2 as well?

No, it isn't.  Therefore, it is the
CPU usage of SAP.

Capisce?


> Please, perhaps my English is really bad.. point to your source so I can
> learn. Don't leave me ignorant.

Basic high school would do for a start.


> The server side CPU changes, yes. No claims made on the app.

Yes, claims made on the app.  Read the
darn paper, instead of imagining what
it should say!


> I took the liberty of checking out the author.
> Just as the keyword "PRODUCTION" in the title might suggest he is indeed
> a DBA.
> This is "his" system. Note that he is looking at long term results, not
> some 3 hours run. The graph itself covers a month.

Yeah, sure.

And whoever in IBM marketing chose to carry this
didn't even do the most basic of checks.

Like:

1- Is the paper using a credible methodology?
2- Is the paper providing results that can be reproduced
in other cases?
3- Does the paper clearly identify what is the subject
being measured?


This one fails on all three basic counts.




Looks like in the usual Feb-March marketing
rush to "prove" that db2 is ahead of Oracle
(must be reckoning time again, eh?)
someone forgot that IBM's motto is

THINK,

not
THICK
....
0
wizofoz2k (1386)
2/10/2008 12:05:35 PM
On Feb 9, 5:43 pm, DA Morgan <damor...@psoug.org> wrote:

>
> There are no schools that teach DB2, no universities, no colleges, no
> trade schools, etc. And if you offered a DB2 class at the university

I'd settle for finding ONE book
about it in the bookshops over here...

ONE single book.
..
..
..
0
wizofoz2k (1386)
2/10/2008 12:08:45 PM
I suppose we both made our closing arguments.
May others be the jury

-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/10/2008 1:43:52 PM
Noons wrote:
> On Feb 9, 5:43 pm, DA Morgan <damor...@psoug.org> wrote:
> 
>> There are no schools that teach DB2, no universities, no colleges, no
>> trade schools, etc. And if you offered a DB2 class at the university
> 
> I'd settle for finding ONE book
> about it in the bookshops over here...
> 
> ONE single book.
Welcome to the internet age:
http://www.redbooks.ibm.com

Cost to you: zero dollars
Value: Priceless

Search for Oracle, or DB2, or .. pick your stuff. Download PDF.

Cheers
Serge
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/10/2008 2:02:10 PM
Serge Rielau wrote:
> DA Morgan wrote:
>> Depends on the data. Have you looked at bank data? phone company data?
>> online reseller data? It isn't as random as you appear to think.
>> Consider, for example, the clustering factor on an index. That same
>> table that has address information to which you refer also has cities,
>> state/provinces, first names, and lots of other very predictably
>> repeating values. And, yes, at the block level.
> Are you saying that first names strongly correlate with your clustering?
> Street names? What do you think is better:
> storing "Main Street" and "Smith" in a global dictionary, or repeating
> them in every block?

I am saying that street names may be random, though often they are
not. The use of North, South, East, West, Avenue, etc. is highly
predictable. So are common first names and a lot of other information.
One example ... in my city every phone number begins with 206-232 or
206-236.

>> What I am surprised you have not yet given voice to is the implication,
>> in Oracle, of using small blocks versus large blocks.
> You teach. I comment.

I know ... Mark can comment if he wishes. <g>
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/10/2008 7:08:28 PM
Serge Rielau wrote:
> I suppose we both made our closing arguments.
> May others be the jury

The jury is the marketplace.

And my guess is that in ten years they will vote with their
HR departments not their IT departments.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/10/2008 7:09:31 PM
Serge Rielau wrote:
> Noons wrote:
>> On Feb 9, 5:43 pm, DA Morgan <damor...@psoug.org> wrote:
>>
>>> There are no schools that teach DB2, no universities, no colleges, no
>>> trade schools, etc. And if you offered a DB2 class at the university
>>
>> I'd settle for finding ONE book
>> about it in the bookshops over here...
>>
>> ONE single book.
> Welcome to the internet age:
> http://www.redbooks.ibm.com
> 
> Cost to you: zero dollars
> Value: Priceless
> 
> Search for Oracle, or DB2, or .. pick your stuff. Download PDF.
> 
> Cheers
> Serge

Pick up a copy of any of Tom Kyte's books. There is no PDF file that is
in the same genre. How about any of the books written by Jonathan Lewis?
Cary Millsap? Any resource even remotely approaching my little Library?
Surely you don't think the next generation of DB2 DBAs and developers
are going to learn their trade from PDF files.

Truth is those files cost exactly what they are worth unless they are
accompanied by a substantial dose of tribal knowledge and experience.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/10/2008 7:15:13 PM
DA Morgan wrote:
> Serge Rielau wrote:
>> I suppose we both made our closing arguments.
>> May others be the jury
> 
> The jury is the marketplace.
> 
> And my guess is that in ten years they will vote with their
> HR departments not their IT departments.

Hmmm... You changed your predictions from 5 to 10 years... Getting more 
confident even after failure? :)

"But quite frankly if Informix is still being marketed as an
independent product five years from now I'll be shocked."

DA Morgan, 2001



-- 
Fernando Nunes
Portugal

http://informix-technology.blogspot.com
My email works... but I don't check it frequently...
0
spam6443 (4)
2/10/2008 8:04:43 PM
DA Morgan wrote:
> Serge Rielau wrote:
>> I suppose we both made our closing arguments.
>> May others be the jury
> The jury is the marketplace.
I doubt the marketplace will settle the intended meaning of "CPU Usage" 
in this particular paper... It seems a bit minute...
I do trust the readership (if not the loudest contributors) of this 
group to judge the matter, however.

Cheers
Serge

-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/10/2008 10:59:44 PM
Serge Rielau wrote:
> DA Morgan wrote:
>> Serge Rielau wrote:
>>> I suppose we both made our closing arguments.
>>> May others be the jury
>> The jury is the marketplace.
> I doubt the marketplace will settle the intended meaning of "CPU Usage" 
> in this particular paper... It seems a bit minute....
> I do trust the readership (if not the loudest contributors) of this 
> group to judge the matter however.
> 
> Cheers
> Serge

Nobody gives a rip about CPU usage. Not once in my many years in IT have
I ever sat in a meeting where the CTO looked the CIO straight in the
eye and asked: "How does the CPU utilization compare?"

Decisions are made on fully burdened FTEs.

Decisions are made on amortization schedules and depreciation.

Decisions are made on the basis of auditor requirements.

Decisions are made based on cost/benefit analysis of meeting regulatory 
requirements.

Decisions are made based on finding resources to deploy and manage.

CPU usage? Couldn't care less. I need more CPUs ... I buy more CPUs.
Ahh the beauty of RAC.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/11/2008 12:45:57 AM
DA Morgan wrote:
> Serge Rielau wrote:
>> DA Morgan wrote:
>>> Serge Rielau wrote:
>>>> I suppose we both made our closing arguments.
>>>> May others be the jury
>>> The jury is the marketplace.
>> I doubt the marketplace will settle the intended meaning of "CPU 
>> Usage" in this particular paper... It seems a bit minute....
>> I do trust the readership (if not the loudest contributors) of this 
>> group to judge the matter however.
> Nobody gives a rip about CPU usage. 
If I were Noons I'd be upset to be called a nobody...
-- 
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
0
srielau (524)
2/11/2008 2:57:52 AM
Serge Rielau wrote:
> DA Morgan wrote:
>> Serge Rielau wrote:
>>> DA Morgan wrote:
>>>> Serge Rielau wrote:
>>>>> I suppose we both made our closing arguments.
>>>>> May others be the jury
>>>> The jury is the marketplace.
>>> I doubt the marketplace will settle the intended meaning of "CPU 
>>> Usage" in this particular paper... It seems a bit minute....
>>> I do trust the readership (if not the loudest contributers) of this 
>>> group to judge the matter however.
>> Nobody gives a rip about CPU usage. 
> If I were Noons I'd be upset to be called a nobody...

Noons doesn't have budget. <g>

He has my empathy and sympathy.

But he still doesn't have budget.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/11/2008 5:06:14 AM
On Feb 11, 1:57 pm, Serge Rielau <srie...@ca.ibm.com> wrote:


> > Nobody gives a rip about CPU usage.
>
> If I were Noons I'd be upset to be called a nobody...

ROFL!
0
wizofoz2k (1386)
2/11/2008 8:04:33 AM
>
> Noons doesn't have budget. <g>
>
> He has my empathy and sympathy.
>
> But he still doesn't have budget.

Heh!  I don't.  But the mob I work for
does.  Better believe it...
0
wizofoz2k (1386)
2/11/2008 8:05:17 AM
Noons wrote:
>> Noons doesn't have budget. <g>
>>
>> He has my empathy and sympathy.
>>
>> But he still doesn't have budget.
> 
> Heh!  I don't.  But the mob I work for
> does.  Better believe it...

Oh I do, I do. I also believe that they are the ones targeted
by the salesforce to buy new software, new hardware, and treated
to golf outings, dinners, lunches, drinks, cruises, etc.
-- 
Daniel A. Morgan
Oracle Ace Director & Instructor
University of Washington
damorgan@x.washington.edu (replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
2/12/2008 1:25:14 AM
On Feb 12, 12:25 pm, DA Morgan <damor...@psoug.org> wrote:
> Noons wrote:
> >> Noons doesn't have budget. <g>
>
> >> He has my empathy and sympathy.
>
> >> But he still doesn't have budget.
>
> > Heh!  I don't.  But the mob I work for
> > does.  Better believe it...
>
> Oh I do I do. I also believe that they are the ones targeted
> by the salesforce to buy new software, new hardware, and treated
> to golf outings, dinners, lunches, drinks, cruises, etc.

Yup!  Very true.  Don't forget the Oracle conferences as well:
all our "senior" damagers attend.  And come back brain-washed with
so much crap we spend the next 6 months slapping them back
to reality...
0
wizofoz2k (1386)
2/13/2008 2:37:55 AM
Reply:

Similar Artilces:

Establishing connection from Oracle Database to DB2 database
Hi, Can any one help me in connecting oracle database to DB2 database oracle is on unix and DB2 is on AS400. Please guide me step by step on the same. Waiting for reply. Its critical umeshchoudhary@gmail.com (U C) wrote in news:d06590c8.0504252348.71a5c6f8@posting.google.com: > Hi, > Can any one help me in connecting oracle database to DB2 database > oracle is on unix and DB2 is on AS400. Please guide me step by step on > the same. > > Waiting for reply. Its critical > use PERL between the two DBs. >Hi, > Can any one help me in connecting oracle database to DB2 database >oracle is on unix and DB2 is on AS400. Please guide me step by step on >the same. >Waiting for reply. Its critical http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:4406709207206#18830681837358 I have used this to example to connect to DB2. chet ...

Database Database Database Database Software Cheap
Database Database Database Database Software Cheap Great Datase Software See Website Below. Ultra Easy to Learn (Typically 30 Seconds) Professional Programmable Database Ver. 2.3 2.1 Million Record Capacity, (New cond). Search Rate: 2000 / Records / Second. DataBase Type: Random Access. Can Create Unlimited Databases. Programmable fields for any Application. Has Six Seperate Field Sets All Programmable. Build Time One Second, (Auto Creates DB). Setup Time: Instantly, Just Enter DB Name. Ultra Cheap Price, Special $20, Paypal Accepted. Application Mailed Instantly (file Attached Email). http://www.vehiclerepair.org/dbPro/dbpro.html ...

Database Database Database Database Software Cheap
Database Database Database Database Great Datase Software See Website Below. Ultra Easy to Learn (Typically 30 Seconds) Professional Programmable Database Ver. 2.3 2.1 Million Record Capacity, (New cond). Search Rate: 2000 / Records / Second. DataBase Type: Random Access. Can Create Unlimited Databases. Programmable fields for any Application. Has Six Seperate Field Sets All Programmable. Build Time One Second, (Auto Creates DB). Setup Time: Instantly, Just Enter DB Name. Ultra Cheap Price, Special $20, Paypal Accepted. Application Mailed Instantly (file Attached Email). http://www.vehiclerepair.org/dbPro/dbpro.html ...

a database, is a database, is a err database
How many times can we see the same request from someone who wants to access data from a 'pick' database through what has come to be 'standard' practices (odbc, oledb) and still get the same old sloppy ' buy this proprietary utility (and above all, my services)' answer. I think most of these pick flavors should have some sort of layer (by now!) to handle this; If someone needs to do this, the service is really 'education' i.e to show them how. Lets cut the shit now and stop with this tired and silly BS and sad marketing schlock. Regards, -Jim Jim wrote: > How many times can we see the same request from someone who wants to > access data from a 'pick' database through what has come to be > 'standard' practices (odbc, oledb) > and still get the same old sloppy ' buy this proprietary utility (and > above all, my services)' answer. I think most of these pick flavors > should have some sort of layer (by now!) to handle this; If someone > needs to do this, the service is really 'education' i.e to show them > how. Lets cut the shit now and stop with this tired and silly BS and > sad marketing schlock. > > Regards, > > -Jim Jim who? I wonder? What is this? An attack on capitalism? Providing services for those who perhaps lack the time, skill, or knowledge to perform such tasks is hardly a crime. Maybe "standard" odbc and...

Add database to existing oracle database
Hi experts, I am new in oracle.I migrated one database from ms-access to oracle 9i using oracle migration utility on my local machine.But now i need to add this oracle database from my local machine to existing main database oracle 8i on main server.Pls. tell how i can add this database from my machine to main server. I think i can use export/import utility.But i never used this utility before, pls. tell what should i need to do.And if export/import is good way to migrate database pls. explain me in steps. I'll really appreciate your kind help. Pls. reply me soon, it's urgent!!! Thanks..so much Sonika "sonika" <sonika_sehrawat@yahoo.com> wrote in message news:709618ea.0409281225.1a09da58@posting.google.com... > Hi experts, > > I am new in oracle.I migrated one database from ms-access to oracle 9i > using oracle migration utility on my local machine.But now i need to > add this oracle database from my local machine to existing main > database oracle 8i on main server.Pls. tell how i can add this > database from my machine to main server. > I think i can use export/import utility.But i never used this utility > before, pls. tell what should i need to do.And if export/import is > good way to migrate database pls. explain me in steps. > > I'll really appreciate your kind help. > Pls. reply me soon, it's urgent!!! > > Thanks..so much > Sonika You are missing a lot of letters in your typing. You m...

Dump Database / Porting Database to Oracle
Hi, I do not know much about informix. I have no running instance here, but its my job to transfer data from informix to oracle. - Is there a tool to dump a whoole database into an ASCII file? - In mysql there is a tool called "mysqldump" which dumps the whole database and create a file with sql statements (create, insert , insert ...), so you can transfer the whole database with mysqldump >file.sql and mysql <file.sql. Is there such a tool in informix? (I have seen that there is a infdump direcorty) - Is there a good way or a (cheap) tool to transfer a database from infomix to oracle? Best Regards Sven Anders, Digitec GmbH Sven Anders wrote: > Hi, > I do not know much about informix. I have no running instance here, > but its my job to transfer data from informix to oracle. > > - Is there a tool to dump a whoole database into an ASCII file? > > - In mysql there is a tool called "mysqldump" which dumps the whole > database and create a file with sql statements (create, insert , > insert ...), so > you can transfer the whole database with mysqldump >file.sql and mysql > <file.sql. Is there such a tool in informix? (I have seen that there > is a > infdump direcorty) dbexport > - Is there a good way or a (cheap) tool to transfer a database from > infomix to oracle? dbexport? -- Ciao, The Obnoxious One "Ogni uomo mi guarda come se fossi ...

How to write shell script to export oracle 8.1.6 database to import oracle 10g database
Hi, I am trying to write or understand the conj process for export the data from oracle 8i to oracle 10g database. I have constraint that the previous loaded data into oracle10g should not be deleted or replace by newly imported database. Regards Nikhil ...

How do I check which Oracle Patches are appplied to an Oracle DataBase Server
I would like to know if someone could help me with this topic, I'm traying to analize and chek which Oracle Patches are appplied to an Oracle DataBase Server >chek which Oracle Patches are appplied to an Oracle DataBase Server If they were applied with oPatch, then "opatch lsinventory" should show you what's there. If you did not use oPatch, then I have no clue. ;-) BD wrote: > >chek which Oracle Patches are appplied to an Oracle DataBase Server > > If they were applied with oPatch, then "opatch lsinventory" should show > you what's there. > > If you did not use oPatch, then I have no clue. ;-) For better or worse ( we know the answer ) opatch is the tool that install patches from oracle support. "If you did not use oPatch" ... what? >"If you did not use oPatch" ... what? In my 8i environments, there are separate scripts that are run - for example, for a cpu, it would be 'install_cpu.sh'. In those cases, I really don't know how (if) one can confirm which patches have been applied to an environment. ...

Database Database Database Software Cheap
Database Database Database Database Software Cheap Great Datase Software See Website Below. Ultra Easy to Learn (Typically 30 Seconds) Professional Programmable Database Ver. 2.3 2.1 Million Record Capacity, (New cond). Search Rate: 2000 / Records / Second. DataBase Type: Random Access. Can Create Unlimited Databases. Programmable fields for any Application. Has Six Seperate Field Sets All Programmable. Build Time One Second, (Auto Creates DB). Setup Time: Instantly, Just Enter DB Name. Ultra Cheap Price, Special $20, Paypal Accepted. Application Mailed Instantly (file Attached Email). http://www.vehiclerepair.org/dbPro/dbpro.html ...

How to copy table from oracle database to sqlserver database ?
Hello, I need to copy a table from an 8i oracle database to a sqlserver 2000 database. Is it possible to use the command "COPY FROM ... TO ..." ? So, what is the correct syntax ? Thanks for your help Cyril On 10 Aug 2004, jewelk@free.fr wrote: > Hello, > > I need to copy a table from an 8i oracle database to a > sqlserver 2000 database. A few options exist. If this is one-off, just use sqlldr to drop the data to a file and then bcp to get it into SQLServer. > Is it possible to use the command "COPY FROM ... TO ..." ? > So, what is the correct syntax ? Well, I'm sure SQLServer has connectivity to Oracle? If you want to go this route, use that and do this from SQLServer. -- Galen Boyer On 10 Aug 2004 07:15:16 -0700, jewelk@free.fr (Cyril) wrote: >Hello, > >I need to copy a table from an 8i oracle database to a sqlserver 2000 database. > >Is it possible to use the command "COPY FROM ... TO ..." ? >So, what is the correct syntax ? > > >Thanks for your help > >Cyril Read up on using the Heterogeneous Gateway to Sqlserver. -- Sybrand Bakker, Senior Oracle DBA "Cyril" <jewelk@free.fr> wrote in message news:cd38c3d6.0408100615.6371b40e@posting.google.com... > Hello, > > I need to copy a table from an 8i oracle database to a sqlserver 2000 database. > > Is it possible to use the command "...

How to copy table from oracle database to sqlserver database ?
Hello, I need to copy a table from an 8i oracle database to a sqlserver 2000 database. Is it possible to use the command "COPY FROM ... TO ..." ? So, what is the correct syntax ? Thanks for your help Cyril "Cyril" <jewelk@free.fr> wrote in message news:cd38c3d6.0408100617.6f7b9f3e@posting.google.com... > Hello, > > I need to copy a table from an 8i oracle database to a sqlserver 2000 database. > > Is it possible to use the command "COPY FROM ... TO ..." ? > So, what is the correct syntax ? > > > Thanks for your help > &...

Re: Dump Database / Porting Database to Oracle #2
Tsutomu Ogiwara wrote: > Hi Sven. > > Try the following command. > > dbexport <your database>. > Then creating current directry <your database.dbs> I think you mean it creates a directory called <your database>.exp. > .unl is ASCII file One per table. > .sql is create table script. But no LOAD commands, as they are handled my dbimport into Informix. So you will need to modify the script to load the data for you into Oracle. Cheers, -- Mark. +----------------------------------------------------------+-----------+ | Mark D. Stock mailto:mdstock@MydasSolutions.com |//////// /| | Mydas Solutions Ltd http://MydasSolutions.com |///// / //| | +-----------------------------------+//// / ///| | |We value your comments, which have |/// / ////| | |been recorded and automatically |// / /////| | |emailed back to us for our records.|/ ////////| +----------------------+-----------------------------------+-----------+ sending to informix-list ...

Re: Dump Database / Porting Database to Oracle #3
Hi Mark. >>Then creating current directry <your database.dbs> > >I think you mean it creates a directory called <your database>.exp. Yes, thanks. -- Tsutomu Ogiwara from Tokyo Japan. ICQ#:168106592 _________________________________________________________________ Tired of spam? Get advanced junk mail protection with MSN 8. http://join.msn.com/?page=features/junkmail sending to informix-list ...

Can you import an Oracle 10G database into a 9I database
Good afternoon, I am curious if it is possible to import an Oracle 10g database into an oracle 9i instance. Any suggestions/thoughts? Thanks, Chris Boerman cboerman@comcast.net schreef: > Good afternoon, > > I am curious if it is possible to import an Oracle 10g database into > an oracle 9i instance. Any suggestions/thoughts? > > Thanks, > Chris Boerman > Of course. It's called downgrading. Or, if you do not want to downgrade, only if you're: 1) skillful 2) lucky You must be lucky, or I would not be answering this. Ever read the "New Features since 9i" chapters on 10G? Why do you think these are called "new"? So, export, using the 9i tool - if you're lucky, that works. Import, using the 9i tool - if you're lucky, that works. -- Regards, Frank van Bortel Top-posting is one way to shut me up... On Mon, 16 Apr 2007 20:27:32 +0200, Frank van Bortel <frank.van.bortel@gmail.com> wrote: >cboerman@comcast.net schreef: >> Good afternoon, >> >> I am curious if it is possible to import an Oracle 10g database into >> an oracle 9i instance. Any suggestions/thoughts? >> >> Thanks, >> Chris Boerman >> > >Of course. It's called downgrading. > >Or, if you do not want to downgrade, only if you're: >1) skillful >2) lucky > >You must be lucky, or I would not be answering this. >Ever read the "New Features since 9i"...

Oracle databases on a server
I have Oracle installation on a SUN UNIX server. I tappears that it is running Oracle 10.2 and Solaris 8. I want to find out how many databases are installed on this server. Would it be true to say that all databases installed on this server are listed in tnsnames.ora where the 'HOST' entry points to this server? On Feb 20, 9:00=A0am, p...@qantas.com.au wrote: > I have Oracle installation on a SUN UNIX server. I tappears that it is > running Oracle 10.2 and Solaris 8. > > I want to find out how many databases are installed on this server. > > Would it be true to say that all databases installed on this server > are listed in tnsnames.ora where the 'HOST' entry points to this > server? No, these might be different *services* served by the same instance. The first place to look at is /var/opt/oracle/oratab file, which should list all Oracle instances on your host. If your Oracle installation follows OFA (Optimal Flexible Architecture,) which is usually true, count $ORACLE_BASE/admin/<dbname> directories - each database should have its own directory under admin. Alternatively, you can also count spfile<SID>.ora files in $ORACLE_HOME/dbs directory. Hth, Vladimir M. Zakharychev N-Networks, makers of Dynamic PSP(tm) http://www.dynamicpsp.com On 20 Feb, 06:00, p...@qantas.com.au wrote: > I have Oracle installation on a SUN UNIX server. I tappears that it is > running Oracle 10.2 and Solaris 8. > > I want...

update oracle database when ever changes are made in access database
Hi all, well i have migrated the database from access to oracle and changes would be done in access only so i want to make same changes in the oracle database say after 10 mins. or so everytime i.e a continuous process either changes made or not so what should i do. "Varun" <varuns123@ggn.hcltech.com> wrote in message news:a76d8c39.0407272029.5df9d058@posting.google.com... > Hi all, > well i have migrated the database from access to oracle and changes > would be done in access only so i want to make same changes in the > oracle database say after 10 mins. or so everytime i.e a continuous > process either changes made or not so what should i do. Get rid of the Access database, just use it as the GUI. Jim "Jim Kennedy" <kennedy-downwithspammersfamily@attbi.net> wrote in message news:<A3GNc.174982$IQ4.4937@attbi_s02>... > "Varun" <varuns123@ggn.hcltech.com> wrote in message > news:a76d8c39.0407272029.5df9d058@posting.google.com... > > Hi all, > > well i have migrated the database from access to oracle and changes > > would be done in access only so i want to make same changes in the > > oracle database say after 10 mins. or so everytime i.e a continuous > > process either changes made or not so what should i do. > Get rid of the Access database, just use it as the GUI. > Jim but the problem is that the actual application that we are enhancing is using access and ...

moving oracle database server into a new server
Hi all I will be moving oracle from one Solaris server into another Solaris server, the IP address and box name will be changed. Oracle version is 8.1.7 and O/S is 5.8 on both servers ( just different O/S patch). Does any one know what I need to do in order to get oracle up and running again on the new server. I know that I will need to change the tnsnames.ora and listner.ora, but is there anything else that I need to do? Thanks Teresa Teresa wrote: > Hi all > > I will be moving oracle from one Solaris server into another Solaris > server, the IP address and box name will be changed. Oracle version is > 8.1.7 and O/S is 5.8 on both servers ( just different O/S patch). > Does any one know what I need to do in order to get oracle up and > running again on the new server. I know that I will need to change the > tnsnames.ora and listner.ora, but is there anything else that I need > to do? > > Thanks > Teresa When I do this with windows boxes I either: #1 Install oracle on the new box, create a new database and export/import from old db to new db #2 Copy datafiles, init, sqlnet, tns.. etc over from coldbackup or shutdown old_database, startup nomount and recreate the controlfile to fit new environment. #3 Restore from online backup of old_database and apply redo logs, again recreating the controlfile to fit the new env. (I usually only do this for "practice" and for creating a development copy of a prod db) #2 ...

Oracle 9i Database Server on Windows 2003 Server
I am wondering if we can install this Server on Windows 2003 Server without any problems. Is there anybody that did that before? Thanks in advance if someone know the answer. Yvon Bouchard, B. Sc. Computer Science Teacher http://info.cegepat.qc.ca/siteyvon I've installed it on windows 2003 enterprise ed. about 1 month ago without any problem. I've installed the version 9.2.0.1 with the patch to 9.2.0.3. 9.2.0.3 is the only version certified by oracle for win 2003. Ciao "Yvon Bouchard" <yvon.bouchard@cablevision.qc.ca> ha scritto nel messaggio news:QCLvb.21605$ZF1.2102185@news20.bellglobal.com... > I am wondering if we can install this Server on Windows 2003 Server without > any problems. > Is there anybody that did that before? > Thanks in advance if someone know the answer. > > Yvon Bouchard, B. Sc. Computer Science > Teacher > http://info.cegepat.qc.ca/siteyvon > > > Yvon Bouchard wrote: > > I am wondering if we can install this Server on Windows 2003 Server without > any problems. > Is there anybody that did that before? > Thanks in advance if someone know the answer. > Although I have not installed on Windows 2003 myself, I know of several such installations and all occurred without issue. I do note that Oracle has a separate download (CD set) for the WIndows 2003 compared to NT/2000/XP, so I suspect the install for the latter may not be ...

SN#12788 Java[TM] DataBase Connectivity API Features in Oracle 10g Database
SYSTEM NEWS FOR SUN USERS Vol 74 Issue 2 2004-04-12 Article 12788 from section "Developer's Section" Tutorial on Concepts and Usage The Oracle Technology Network provides a tutorial that outlines the features of Java[TM] DataBase Connectivity (JDBC[TM]) technology in relation to the new enhancements that they bring to the Oracle 10g database. The tutorial evaluates Web RowSets, Connection Cache, and Named Parameters, IEEE datatypes -- BINARY_DOUBLE and BINARY_FLOAT, and PL/SQL Index-by Tables. Each section in the tutorial covers topics such as concepts, design, required software, setup and application usage. Details at http://sun.systemnews.com/74/2/opt-dev/?12788#12788 Have a custom version of 'System News for Sun Users' delivered to you via email each week in PDF, text or HTML. Only the sections that you select will be included in your copy of the news magazine. Subscribe at http://sun.systemnews.com/subscribe (c) 2004 System News, Inc. http://www.systemnews.com ...

Oracle database works properly but from designers tools I can't connect to database
I've installed Oracle 10g on my computer. I can log in using the SYSTEM account via plsql. I've installed Designer and tried to log in from Designer's plsql to the same database, also using the SYS account, but I could not; I got an ORA-01017 error. What am I doing wrong? Why does plsql from the database installation connect me while the one from the Designer installation refuses the connection? I need the database just to run Designer for entity and process diagrams. I need Barker notation, so I can only use Oracle Designer because it's the only tool with Barker notation.

Hope to get some help from you,
Jacek

On Sat, 06 Dec 2008 13:10:05 -0800, Jacek Maria Jackowski wrote:

[oracle@oracle16 ~]$ oerr ora 1017
01017, 00000, "invalid username/password; logon denied"
// *Cause:
// *Action:
[oracle@oracle16 ~]$

Have you tried using...

Oracle database
crdb1.sql

connect SYS/change_on_install as SYSDBA
set echo on
spool /app/oracle/product/9.0/assistants/dbca/logs/CreateDB.log
startup nomount pfile="/app/oracle/admin/eqdev_9i/scripts/init.ora";
CREATE DATABASE eqdev_9i
    MAXINSTANCES 1
    MAXLOGHISTORY 1
    MAXLOGFILES 5
    MAXLOGMEMBERS 5
    MAXDATAFILES 100
    DATAFILE '/ora2/oradata/eqdev_9i/system01.dbf' SIZE 80M REUSE
        AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
    UNDO TABLESPACE "RBS" DATAFILE '/ora2/oradata/eqdev_9i/rbs01.dbf' SIZE 50M REUSE
        AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
    CHARACTER SET US7ASCII
    NATIONAL CHARACTER SET AL16UTF16
    LOGFILE GROUP 3 ('/ora2/oradata/eqdev_9i/redo03.log') SIZE 10M,
            GROUP 2 ('/ora2/oradata/eqdev_9i/redo02.log') SIZE 10M,
            GROUP 1 ('/ora2/oradata/eqdev_9i/redo01.log') SIZE 10M;
spool off
exit;

crdb2.sql

connect SYS/change_on_install as SYSDBA
set echo on
spool /app/oracle/product/9.0/assistants/dbca/logs/CreateDBFiles.log
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE '/ora2/oradata/eqdev_9i/temp01.dbf' SIZE 20M REUSE
    AUTOEXTEND OFF NEXT 640K MAXSIZE UNLIMITED
    EXTENT MANAGEMENT LOCAL;
CREATE TABLESPACE "INDX" LOGGING DATAFILE '/ora2/oradata/eqdev_9i/indx01.dbf' SIZE 15M REUSE
    AUTOEXTEND OFF NEXT 1280K MAXSIZE UNLIMITED
    EXTENT MANAGEMENT LOCAL;
CREATE TABLESPACE "USERS" LOGGING DATAFILE '/ora2/oradata/eqdev_9i/users01.dbf' SIZE 15M REUSE
    AUTOEXTEND OFF NEXT 1280K MAXSIZE UNLIMITED
    EXTEN...

Open a database from a database
I want to be able to open and run an Access database from within an already running one. The OpenDatabase command simply opens it but does not run the startup routine. Interestingly, opening from Windows Explorer has a different meaning, in that it opens and runs! What command enables me to open another database and get it to run its AutoExec macro, from code?

Jim

I don't know the answer, but I do know that when you use OpenDatabase, the code in a startup form of that database does work.

Martin

"Jim Devenish" wrote in message news:0b4510ac-d415-4c2c-b200-7546aae178a4@gu8...

How to use icons (database, database backup, save, delete etc.) in Oracle forms 5? #2
Hello,

Thanks for your reply. Could you please inform me how I should set the environment variable in order to attach the menus to buttons in an Oracle Forms application?

Thanks

Martin

Martin wrote:
> Could you please inform me how I should set the environment variable in
> order to attach the menus to buttons in an Oracle Forms application?

You need to read the documentation. I say this partly because how to do it is operating-system dependent and partly because you need to invest a bit of effort on your own behalf; asking me to read the installation documentation for you is beyond the pale.
--
Daniel Morgan
http://www.outreach.washington.edu/ext/certificates/oad/oad_crs.asp
http://www.outreach.washington.edu/ext/certificates/aoa/aoa_crs.asp
damorgan@x.washington.edu (replace 'x' with a 'u' to reply)

tcl database connection for Oracle MySQL PostgreSQL Sybase "Microsoft SQL Server 2000 Desktop Engine" DB2 Interbase/Firebird SQLite Microsoft Access
Hello everyone,

Many people are looking for, or missing, support for their database within Tcl. Somehow this was overlooked: http://sqlrelay.sourceforge.net/index.html

It has a native Tcl API and supports the most popular databases. One problem: it doesn't have production-ready support on Windows, but for the lucky people who use Linux/Unix it seems to be an excellent solution while TDBC is being developed.

One small correction: the Tcl part of the db connection will work just fine on Windows connecting to SQL Relay running on Linux/Unix (which, in turn, can connect to a database running on Windows :) ). What doesn't have production-ready support (according to the developers) is SQL Relay running on Windows.
