Very high 'log file sync' wait time with no 'log file parallel write' wait time

Oracle 10.2.0.2 SE on Windows 2003 SP1

The following trace file section is from a very slow import session
which is importing 9 Million rows into the database.

COMMIT

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute 179802      4.90     474.73          0          0          0          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total   179802      4.90     474.73          0          0          0          0

Misses in library cache during parse: 0
Parsing user id: 21     (recursive depth: 1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                              179077        0.53        468.19

********************************************************************************
***Snipped***
********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute     17     14.64      81.21          1       6176     210055     184773
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       17     14.64      81.21          1       6176     210055     184773

Misses in library cache during parse: 0
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                                  18        0.00          0.02
  SQL*Net more data from client               20641        0.00          0.34
  SQL*Net message to client                      32        0.00          0.00
  SQL*Net message from client                    32        3.75         17.60
  log file switch completion                      1        0.10          0.10


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse       15      0.01       0.00          0          0          1          0
Execute 179817      4.90     474.74          0         40         10         10
Fetch       15      0.00       0.00          0         25          0         10
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total   179847      4.92     474.74          0         65         11         20

Misses in library cache during parse: 3
Misses in library cache during execute: 3

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log file sync                              179075        0.53        468.19

I am using the following options for import:

imp sysadm/<pwd> file=D:\temp\SAND_RPTD.DMP
log=D:\temp\IMP_SAND_RPTD.log tables=(PS_TL_RPTD_TIME) buffer=20000000
ignore=y statistics=none commit=y indexes=y

The import session is committing approximately every 10,000 rows (I can
confirm this by doing a count(*) from the table).

I am seeing approximately one 'log file sync' wait for every row
inserted into the table:

SQL> select total_waits from v$session_event where event = 'log file sync'
  2  and sid=94
  3  union select count(*) from ps_tl_rptd_time
  4  ;

TOTAL_WAITS
-----------
    4891689
    4901919

but we have had no significant waits for 'log file parallel write'
since instance startup.
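
(For reference, this is roughly how I am checking the instance-wide figures -
just a sketch:)

SQL> select event, total_waits, time_waited from v$system_event
  2  where event in ('log file sync', 'log file parallel write');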

What is causing this huge amount of 'log file sync' activity..?  The
10046 trace file seems to suggest that we are committing for every row
in the table but the count(*) is increasing in line with the 'buffer'
parameter from the import command.

Is this a bug..?


Matt

0
mccmx (307)
11/24/2006 1:52:25 PM
comp.databases.oracle.server

On Fri, 24 Nov 2006 05:52:25 -0800, mccmx wrote:

> 
> What is causing this huge amount of 'log file sync' activity..?  The
> 10046 trace file seems to suggest that we are committing for every row
> in the table but the count(*) is increasing in line with the 'buffer'
> parameter from the import command.
> 
> Is this a bug..?

Matt, how big is your log buffer? Log writer will write:
1) Every 3 seconds
2) When there is more than 1M in the log buffer
3) When your log buffer is more than 1/3 full
4) When "commit" is issued.

The 10046 trace suggests that you are waiting for log file sync, which
means that you're writing to the log file frequently. If your log buffer
is too small, it is possible that you're writing after every row. Also,
there is a possibility that import will write more than 1M of redo
entries, if your dump file contains a LOB field. In that case, the
buffer/commit combination in the import parameters will not help you much.
Have you checked V$SESSION_EVENT and V$SESS_TIME_MODEL? V$SESS_TIME_MODEL
is something that gives you a breakdown of where the session's DB time goes.
Also, this is probably a stupid question, but how frequently are you
performing checkpoints and log switches?
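
For example, something along these lines (just a sketch; sid 94 taken from
your query, plus the log buffer size for reference):

select value from v$parameter where name = 'log_buffer';

select stat_name, value
  from v$sess_time_model
 where sid = 94
 order by value desc;

select event, total_waits, time_waited
  from v$session_event
 where sid = 94
 order by time_waited desc;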

-- 
http://www.mladen-gogala.com

0
11/24/2006 5:40:00 PM
Hi Matt,

I would confirm the number of commits by looking at v$sesstat, rather
than just doing a count(*) from the table.
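
Something like this should show the actual commit count for the session
(a sketch - substitute the import session's SID, e.g. 94 from your earlier
query):

select sn.name, ss.value
  from v$sesstat ss, v$statname sn
 where ss.statistic# = sn.statistic#
   and sn.name = 'user commits'
   and ss.sid = 94;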

If it does look like you are committing more frequently than you thought,
then the following is something else to check, from the doco...

"For tables containing LOBs or LONG, BFILE, REF, ROWID,UROWID, or
TIMESTAMP columns, rows are inserted individually. The size of the
buffer must be large enough to contain the entire row, except for LOB
and LONG columns. If the buffer cannot hold the longest row in a table,
Import attempts to allocate a larger buffer."

Does your table have any of these datatypes?

HTH,

Steve

0
stevedhoward (759)
11/25/2006 1:55:42 PM
> Matt, how big is your log buffer? Log writer will write

2Mb

> 1) Every 3 seconds

We committed over 170,000 times in the space of a few minutes:

Execute 179,802      4.90     474.73          0          0          0

> 2) When there is more then 1M in log buffer

The average row length of the table is only 200 bytes

> 3) When your log buffer is more then 1/3 full

The average row length of the table is only 200 bytes

> 4) When "commit" is issued.

It looks like the database is issuing the commit command for every row
- but we are not using any of the datatypes which force this behaviour
(e.g. LOBs etc)

Matt

0
mccmx (307)
11/27/2006 12:30:11 PM
mccmx@hotmail.com wrote:
> > Matt, how big is your log buffer? Log writer will write
>
> 2Mb
>
> > 1) Every 3 seconds
>
> We commited over 170,000 times in the space of a few minutes..:
>
> Execute 179,802      4.90     474.73          0          0          0
>
> > 2) When there is more then 1M in log buffer
>
> The average row length of the table is only 200 bytes
>
> > 3) When your log buffer is more then 1/3 full
>
> The average row length of the table is only 200 bytes
>
> > 4) When "commit" is issued.
>
> It looks like the database is issuing the commit command for every row
> - but we are not using any of the datatypes which force this behaviour
> (e.g. LOBs etc)
>
> Matt

Why are you specifying "commit=y" for imp?  If you set commit=n, a
commit will be performed after each object is fully imported, rather
than after the number of rows that will fit into the buffer (the
buffer=20000000 specification).  The special cases outlined by Steve
Howard still apply.

See:
http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:10094378836151
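
For comparison, that would just be your original command with the one
parameter changed (untested sketch):

imp sysadm/<pwd> file=D:\temp\SAND_RPTD.DMP
log=D:\temp\IMP_SAND_RPTD.log tables=(PS_TL_RPTD_TIME) buffer=20000000
ignore=y statistics=none commit=n indexes=y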

Charles Hooper
PC Support Specialist
K&M Machine-Fabricating, Inc.

0
hooperc2000 (791)
11/27/2006 12:54:08 PM
> "For tables containing LOBs or LONG, BFILE, REF, ROWID,UROWID, or
> TIMESTAMP columns, rows are inserted individually. The size of the
> buffer must be large enough to contain the entire row, except for LOB
> and LONG columns. If the buffer cannot hold the longest row in a table,
> Import attempts to allocate a larger buffer."
>
> Does your table have any of these datatypes?

No,

SQL> select distinct data_type from user_tab_columns where table_name =

'PS_TL_RPTD_TIME';

DATA_TYPE
--------------------------------------------------------------------------------

NUMBER
DATE
VARCHAR2

0
mccmx (307)
11/27/2006 12:54:54 PM
mccmx@hotmail.com wrote:
> > "For tables containing LOBs or LONG, BFILE, REF, ROWID,UROWID, or
> > TIMESTAMP columns, rows are inserted individually. The size of the
> > buffer must be large enough to contain the entire row, except for LOB
> > and LONG columns. If the buffer cannot hold the longest row in a table,
> > Import attempts to allocate a larger buffer."
> >
> > Does your table have any of these datatypes?
>
> No,
>
> SQL> select distinct data_type from user_tab_columns where table_name =
>
> 'PS_TL_RPTD_TIME';
>
> DATA_TYPE
> --------------------------------------------------------------------------------
>
> NUMBER
> DATE
> VARCHAR2

I could not duplicate this in my imp test case (3MB table, buffer of
100K, ~504 byte row size suggesting about 190-200 rows per array
insert, 10.2.0.1 on Windows XP Pro).

What I did see is something similar to the following...

-------------execute as the buffer has filled (190 rows)...---------------------------

EXEC
#5:c=10015,e=3722,p=0,cr=23,cu=154,mis=0,r=190,dep=0,og=1,tim=4640523894
WAIT #5: nam='SQL*Net message to client' ela= 2 driver id=1111838976
#bytes=1 p3=0 obj#=-1 tim=4640523957
WAIT #5: nam='SQL*Net message from client' ela= 48 driver id=1111838976
#bytes=1 p3=0 obj#=-1 tim=4640524028

---------------Recursive commit for the 190 rows above--------------

XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 34749 buffer#=2961 p2=0 p3=0 obj#=-1
tim=4640559015
WAIT #0: nam='SQL*Net message to client' ela= 4 driver id=1111838976
#bytes=1 p3=0 obj#=-1 tim=4640559091
WAIT #0: nam='SQL*Net message from client' ela= 565 driver
id=1111838976 #bytes=1 p3=0 obj#=-1 tim=4640559684

---------------Start the next 190 row chunk---------------------------------

I'm not sure if it would help, but the raw trace file may show
something that the aggregated tkprof doesn't.  What does it look like
sequentially?  I can't even imagine, because the non-recursive section
of your trace shows that the INSERT statement was executed 17
times for 184773 rows, or 10,869 rows per execution, which syncs up
with your 20MB buffer and 200 bytes/row.  If there is a recursive
commit in there for every row, I would like to see that raw trace
file.

Regards,

Steve

0
stevedhoward (759)
11/27/2006 3:38:59 PM
mccmx@hotmail.com wrote:
> > "For tables containing LOBs or LONG, BFILE, REF, ROWID,UROWID, or
> > TIMESTAMP columns, rows are inserted individually. The size of the
> > buffer must be large enough to contain the entire row, except for LOB
> > and LONG columns. If the buffer cannot hold the longest row in a table,
> > Import attempts to allocate a larger buffer."
> >
> > Does your table have any of these datatypes?
>
> No,
>
> SQL> select distinct data_type from user_tab_columns where table_name =
>
> 'PS_TL_RPTD_TIME';
>
> DATA_TYPE
> --------------------------------------------------------------------------------
>
> NUMBER
> DATE
> VARCHAR2

There are also some datatypes missing from that list in the docs, but none
of those apply here (unless some new silly bug confuses DATE with TIMESTAMP...).

From metalink Note:223117.1:

"If 'log file parallel write' is significantly different i.e smaller,
  then the delay is caused by the other parts of the Redo Logging
mechanism
  that occur during a COMMIT/ROLLBACK (and are not I/O-related).
  Sometimes there will be latch contention on redo latches, evidenced
by
  'latch free' or 'LGWR wait for redo copy' wait events. "

Have you tried taking out the commit parameter?  If there's something
screwy about the array processing, that would make a noticeable
difference.

Also, from Note:125269.1:

"Over time (since database startup), if you see increasing values for:

     select * from v$sysstat where name like 'redo%space%';

   your redo log buffer is too small."

In other words, you probably don't have an I/O problem, but some sort
of memory or latch contention from all the commits.
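
A quick (hedged) way to look for that latch contention, alongside the
redo%space% query above:

select name, gets, misses, sleeps
  from v$latch
 where name in ('redo allocation', 'redo copy', 'redo writing');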

jg
-- 
@home.com is bogus.
http://www.spamhaus.org/news.lasso?article=161

0
joel-garry (4553)
11/27/2006 11:55:09 PM
> Why are you specifying "commit=y" for imp?  If you set commit=n, a
> commit will be performed after each object is fully imported, rather
> than after the number of rows that will fit into the buffer
> (buffer=20000000 specification)  The special cases outlined by Steve
> Howard  still apply.

To avoid unnecessarily increasing the undo tablespace....

Matt

0
mccmx (307)
11/28/2006 10:59:52 AM
I just see thousands of EXEC calls for COMMIT, each with an associated
'log file sync' wait:

PARSING IN CURSOR #9 len=6 dep=1 uid=21 oct=44 lid=21 tim=276327492
hv=255718823 ad='0'
COMMIT
END OF STMT
EXEC #9:c=0,e=2140,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276327489
WAIT #9: nam='log file sync' ela= 1848 buffer#=3637 p2=0 p3=0 obj#=0
tim=276329947
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1983,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276330067
WAIT #9: nam='log file sync' ela= 1708 buffer#=3639 p2=0 p3=0 obj#=0
tim=276331899
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1848,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276332033
WAIT #1: nam='SQL*Net more data from client' ela= 27 driver
id=1111838976 #bytes=2 p3=0 obj#=0 tim=276332165
WAIT #9: nam='log file sync' ela= 1883 buffer#=3641 p2=0 p3=0 obj#=0
tim=276334144
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=2011,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276334265
WAIT #9: nam='log file sync' ela= 1789 buffer#=3643 p2=0 p3=0 obj#=0
tim=276336184
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=2174,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276336561
WAIT #9: nam='log file sync' ela= 1767 buffer#=3645 p2=0 p3=0 obj#=0
tim=276338452
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1896,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276338574
WAIT #9: nam='log file sync' ela= 1766 buffer#=3647 p2=0 p3=0 obj#=0
tim=276340465
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1896,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276340589
WAIT #9: nam='log file sync' ela= 1981 buffer#=3649 p2=0 p3=0 obj#=0
tim=276342693
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=2109,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276342814
WAIT #9: nam='log file sync' ela= 1646 buffer#=3651 p2=0 p3=0 obj#=0
tim=276344584
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1776,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276344707
WAIT #9: nam='log file sync' ela= 1696 buffer#=3653 p2=0 p3=0 obj#=0
tim=276346527
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1822,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276346647
WAIT #9: nam='log file sync' ela= 1740 buffer#=3655 p2=0 p3=0 obj#=0
tim=276348508
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1870,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276348632
WAIT #9: nam='log file sync' ela= 1781 buffer#=3657 p2=0 p3=0 obj#=0
tim=276350538
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1992,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276350741
WAIT #1: nam='SQL*Net more data from client' ela= 19 driver
id=1111838976 #bytes=4 p3=0 obj#=0 tim=276351195
WAIT #9: nam='log file sync' ela= 1720 buffer#=3659 p2=0 p3=0 obj#=0
tim=276353054
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1862,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276353185
WAIT #9: nam='log file sync' ela= 1800 buffer#=3661 p2=0 p3=0 obj#=0
tim=276355113
XCTEND rlbk=0, rd_only=1
EXEC #9:c=0,e=1927,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=276355232
WAIT #9: nam='log file sync' ela= 1968 buffer#=3663 p2=0 p3=0 obj#=0
tim=276357322
XCTEND rlbk=0, rd_only=1

Matt

0
mccmx (307)
11/28/2006 11:58:03 AM
On Tue, 28 Nov 2006 02:59:52 -0800, mccmx wrote:

> To avoid unneccessarily increasing the undo tablespace....
> 
> Matt

This is reasonable. That's why I am doing it, too. Have you tried with
buffers of 10 or 20 MB?

-- 
http://www.mladen-gogala.com

0
11/28/2006 1:58:32 PM
mccmx@hotmail.com wrote:
> Oracle 10.2.0.2 SE on Windows 2003 SP1
>
> The following trace file section is from a very slow import session
> which is importing 9 Million rows into the database.
>
> [original post and tkprof output snipped]
>
> Is this a bug..?
>
> Matt

Are your logs on different mounts?  Are you on RAID-5?  Is it direct
path?  It looks like not.
I hope you don't have your Windows page file on the same volume as the
redo.

0
Junk6218 (272)
11/28/2006 2:21:38 PM
mccmx@hotmail.com wrote:
> I just see thousands of EXEC calls for COMMIT, each with an associated
> 'log file sync' wait:
>
> [raw 10046 trace excerpt snipped]
>
> Matt

Matt,

One item of interest is that all of the transaction end markers
(commits) are read only (XCTEND rlbk=0, rd_only=1).  This is the case
when a commit is issued but involves no work.  You can reproduce this
by tracing a session in which you update a row (even to its current
value), and then issue two commits.   The first commit will show
rd_only = 0, while the second will show rd_only = 1.

This explains why you are only seeing the updates in the table every
10,000 rows.  The previous 9,999 commits are read only, i.e., involve
no "real" change.

This may also explain why they are showing up under the recursive
section of the trace.  It seems that I read somewhere once before that
you can see this behaviour with LMTs, as it represents the "internal"
commits performed by the software for space management operations, as
redo is involved for these operations.  No dictionary tables are
updated, but a "commit" is performed when the bitmap is changed.  I
cannot find the source right now, and maybe someone else knows
differently, but I would be curious to see if you can reproduce it when
the space is already allocated, i.e., load the table, delete, and then
reload so all the extents are already allocated.  I realize that is a
lot of work for something that I cannot say with certainty is the
problem, but the fact they are all read only leads me in at least some
"under the hood" recursive direction.

HTH,

Steve

0
stevedhoward (759)
11/28/2006 4:12:53 PM
> Are your logs on different mounts?  Are you on RAID-5.  Is it Direct
> Path, looks like not.
> I hope you don't have your windows page file on the same volume as the
> redo.

What makes you so sure it is an I/O problem?  I've already said that we
are seeing no waits for 'log file parallel write', which suggests that
log file I/O is not the problem.

Matt

0
mccmx (307)
11/29/2006 7:06:04 AM
I haven't followed this closely, but if you're
seeing lots of waits for log file sync, but
no waits for log file writes, then

a) There used to be a bug in 9i which did not
report the timing of log file writes properly -
it's possible that this has slipped into your
version of 10g.

OR

b) somehow your log file writer is using
a variant of asynchronous I/O, so the log
writer is dispatching a write request and
sending an ACK to the front-end without
having to wait for the write to complete.
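
If (b) is a possibility, the I/O-related settings are at least
easy to check (sketch only):

select name, value
  from v$parameter
 where name in ('disk_asynch_io', 'filesystemio_options');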


-- 
Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com

Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


<mccmx@hotmail.com> wrote in message 
news:1164783964.591310.257610@16g2000cwy.googlegroups.com...
>> Are your logs on different mounts?  Are you on RAID-5.  Is it Direct
>> Path, looks like not.
>> I hope you don't have your windows page file on the same volume as the
>> redo.
>
> What makes you so sure it is an I/O problem - I've already said that we
> are seeing no waits for 'log file parallel write', which suggest that
> log file I/O is not the problem..
>
> Matt
> 


0
jonathan5683 (1392)
11/29/2006 8:46:58 AM
mccmx@hotmail.com wrote:
> Oracle 10.2.0.2 SE on Windows 2003 SP1
>
> The following trace file section is from a very slow import session
> which is importing 9 Million rows into the database.
>
> [original post and tkprof output snipped]
>
> Is this a bug..?
>
> Matt

Matt,

have you considered using Data Pump (expdp/impdp) instead of exp/imp?
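
Something along these lines, for example - note that DP_DIR and the dump
file name are made up, a directory object would need to exist, and the
existing exp-format dump can't be read by impdp, so the export would have
to be redone with expdp:

expdp sysadm/<pwd> directory=DP_DIR dumpfile=sand_rptd.dmp tables=PS_TL_RPTD_TIME
impdp sysadm/<pwd> directory=DP_DIR dumpfile=sand_rptd.dmp tables=PS_TL_RPTD_TIME table_exists_action=append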

-bdbafh

0
bdbafh (710)
11/29/2006 4:32:45 PM
Jonathan Lewis wrote:
> I haven't followed this closely, but if you're
> seeing lots of waits for log file sync, but
> no waits for log file writes, then
>
> a) There used to be a bug in 9i which did not
> report the timing of log file writes properly -
> it's possible that this has slipped into your
> version of 10g.
>
> OR
>
> b) somehow your log file writer is using
> a variant of asynchronous I/O, so the log
> writer is dispatching a write request and
> sending an ACK to the front-end without
> having to wait for the write to complete.
>
>

Jonathan,

I continue to be curious about the origination of the "spurious"
commits.  If you look at the raw 10046 trace snippet Matt posted, they
are all read only.  What would cause this on an import?

Thanks,

Steve

0
stevedhoward (759)
11/29/2006 4:48:19 PM
mccmx@hotmail.com wrote:
> > Are your logs on different mounts?  Are you on RAID-5.  Is it Direct
> > Path, looks like not.
> > I hope you don't have your windows page file on the same volume as the
> > redo.
>
> What makes you so sure it is an I/O problem - I've already said that we
> are seeing no waits for 'log file parallel write', which suggest that
> log file I/O is not the problem..
>
> Matt
"Me!  I not sure of anything..."   It's is simply the first place I
start even when looking at a trace file.  Without knowing this info, I
can't be sure the issue isn't simpler than it seems.  I'm not sure why
you are saying that I/O is not the problem w/o log file parallel write.
 Better I/O could improve the sync performance.  Large frequent commits
and a too large log buffer can also be a cause.

0
Junk6218 (272)
12/1/2006 10:15:07 PM
EscVector wrote:
> mccmx@hotmail.com wrote:
> > > Are your logs on different mounts?  Are you on RAID-5.  Is it Direct
> > > Path, looks like not.
> > > I hope you don't have your windows page file on the same volume as the
> > > redo.
> >
> > What makes you so sure it is an I/O problem - I've already said that we
> > are seeing no waits for 'log file parallel write', which suggest that
> > log file I/O is not the problem..
> >
> > Matt
> "Me!  I not sure of anything..."   It's is simply the first place I
> start even when looking at a trace file.  Without knowing this info, I
> can't be sure the issue isn't simpler than it seems.  I'm not sure why
> you are saying that I/O is not the problem w/o log file parallel write.
>  Better I/O could improve the sync performance.  Large frequent commits
> and a too large log buffer can also be a cause.

I take it back, I do see why... I'm sometimes overzealous in my rant
against RAID-5.
The question I should have asked is why not just increase UNDO.  What's
the issue with that?  UNDO is cheap compared to trying to debug why a
commit is happening when one should not during an imp with a large buffer
and commit=y.  Question: does the wait happen if commit=y is
removed?  If not, nuff said... up undo, get it done.  For educational
purposes, then fine, keep it going; it's probably a bug, open a TAR.

0
Junk6218 (272)
12/2/2006 3:43:06 AM
EscVector wrote:

> The question I should have asked is why not just increase UNDO.  What's
> the issue with that?  UNDO is cheap compared to trying to debug why a
> commit is happening when one should not during a imp with large buffer
> and commit=y... Question, does the wait happen if the commit=y is
> removed.  If not, nuff said... Up Undo, get it done.  For educational
> purposes, then fine,  keep it going, probably a bug, open a tar.

Well, I don't know about the OP, but on one system, if I increased undo
to where a script tells me it "should be," I would have to quadruple
the tablespace from 10G to 40G.  Now, that system
has two days online full backups available, so that means an additional
90G.  I will be running some significant updates on that db within the
next month, and they generate enough archives to already require me to
get rid of one of the backups to have enough room, as the arcs take 3
days to drop off.  It's not cheap when your SAN is already maxed out
with disk.  It's inevitably a balancing act.  (Actually there is enough
unallocated disk to avoid getting rid of the backup, but enough other
things are going on, like physically moving a big chunk of the company
including all the servers, that the sysadmins are already overworked
and really shouldn't deal with any changes at this time, even if I did
it for them.)  So you really have to watch out for assuming what
"cheap" means.

jg
--
@home.com is bogus.
http://geeksaresexy.blogspot.com/2006/01/freeze-your-hard-drive-to-recover-data.html

0
joel-garry (4553)
12/4/2006 10:24:49 PM
joel garry wrote:

> Well, I don't know about the OP, but on one system, if I increased undo
> to where a script tells me it "should be," I would have to quadruple
> the tablespace from 10G to 40G.  Now, that system
> has two days online full backups available, so that means an additional
> 90G.
> 
> jg
> --
> @home.com is bogus.
> http://geeksaresexy.blogspot.com/2006/01/freeze-your-hard-drive-to-recover-data.html

Only if your backup is brain-dead. Aren't you using RMAN?

And just in case someone interprets that answer as meaning you should do
what the v$ tells you to ... are there any 1555s or other issues?
-- 
Daniel A. Morgan
University of Washington
damorgan@x.washington.edu
(replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
0
damorgan3 (6326)
12/5/2006 3:22:00 AM
DA Morgan wrote:
> joel garry wrote:
>
> > Well, I don't know about the OP, but on one system, if I increased undo
> > to where a script tells me it "should be," I would have to quadruple
> > the tablespace from 10G to 40G.  Now, that system
> > has two days online full backups available, so that means an additional
> > 90G.
> >
> > jg
> > --
> > @home.com is bogus.
> > http://geeksaresexy.blogspot.com/2006/01/freeze-your-hard-drive-to-recover-data.html
>
> Only if your backup is brain-dead. Aren't you using RMAN?

I'm using 9i RMAN.  What about it?  It takes my 10G undo file (which
just now is using under 700M) and two other 2G files (which are using
about 650M)  and puts them into 1 11G piece.  I get about 70%
compression when I compress the pieces to an off-SAN device.

>
> And just in case someone interprets that answer as meaning you should do
> what the v$ tells you too ... are there any 1555s or other issues?

If I don't kill off leftover sessions nightly, yes.  undo retention is
10 hours.

jg
--
@home.com is bogus.
"...that's not how class-action litigation is supposed to work."
http://www.signonsandiego.com/uniontrib/20061205/news_1b5lerach.html

0
joel-garry (4553)
12/5/2006 7:48:05 PM
joel garry wrote:
> DA Morgan wrote:
> > joel garry wrote:
> >
> > > Well, I don't know about the OP, but on one system, if I increased undo
> > > to where a script tells me it "should be," I would have to quadruple
> > > the tablespace from 10G to 40G.  Now, that system
> > > has two days online full backups available, so that means an additional
> > > 90G.
> > >
> > > jg
> > > --
> > > @home.com is bogus.
> > > http://geeksaresexy.blogspot.com/2006/01/freeze-your-hard-drive-to-recover-data.html
> >
> > Only if your backup is brain-dead. Aren't you using RMAN?
>
> I'm using 9i RMAN.  What about it?  It takes my 10G undo file (which
> just now is using under 700M) and two other 2G files (which are using
> about 650M)  and puts them into 1 11G piece.  I get about 70%
> compression when I compress the pieces to an off-SAN device.
>
> >
> > And just in case someone interprets that answer as meaning you should do
> > what the v$ tells you too ... are there any 1555s or other issues?
>
> If I don't kill off leftover sessions nightly, yes.  undo retention is
> 10 hours.
>
> jg
> --
> @home.com is bogus.
> "...that's not how class-action litigation is supposed to work."
> http://www.signonsandiego.com/uniontrib/20061205/news_1b5lerach.html

Your point is taken regarding "cheap", but I have found that the DBA
often buys into problems that they shouldn't, such as having to work
around backup-related disk shortages.  RMAN is a great tool, but even
it is limited when the database grows to sufficient size.  "Cheap" in
this instance involves debugging the high commit rate and related log
sync waits vs increasing undo, getting the data loaded, and then
possibly resizing undo after it completes.  I suggest it would be
cheaper, or optimal, to simply increase undo in this situation.  A 9
million row insert (not knowing the actual row size) at 200 bytes per
row comes out to under 2GB.  So, if there are indexes, that would bump
up undo, but still, let's guesstimate at 5GB more undo for this insert.
Is it worth the time in this situation?  Your situation/undo mileage
may vary.

0
Junk6218 (272)
12/5/2006 8:42:00 PM
EscVector wrote:
> [earlier quoting snipped]
>
> Your point is taken regarding "cheap", but I have found that the DBA
> often buys into problems that they shouldn't such as having to work
> around backup related disk shortages.  RMAN is a great tool, but even
> it is limited when the database grows to sufficient size.  "Cheap in
> this instance involves high commit rate and related log sync waits
> debugging vs increasing undo and getting the data loaded and then
> possibly resizing undo after it completes.  I suggest it would be
> cheaper or optimal to simple increase undo in this situation.  A 9
> million row insert( not knowing the actual row size) at 200 bytes per
> row comes out to under 2gb.  So, if there are indexes, that would bump
> up undo, but still, let's gestimate at 5gb more undo for this insert.
> Is it worth the time in this situation?  Your situation/undo mileage
> may vary.

My situation/undo is, for your way of doing it, undo would need to be
larger than the data in the database.  commit=y makes much more sense;
you don't have to worry so much about a slight increase in data after
testing, or some weirdness in segment alignment, blowing the actual
live load.  Of course, that's a judgement call on my part; I don't want
to give up my weekends and New Year's Eve to babysit this stuff.  Set it
up in cron and forget about it until normal work hours.

I agree with your point about buying into problems people shouldn't.
As a contractor, I don't even get buy-in that a DBA is necessary, or
that computers are cheaper than people, or that salespeople aren't
necessarily the best system integration experts.  Truly a strange
result, considering the people making the decisions are budget, IS
management and cost accounting types; one would think they would
understand the limits of _their_ tools.

jg
-- 
@home.com is bogus.
http://www.rocketracingleague.com/

0
joel-garry (4553)
12/5/2006 11:26:00 PM
joel garry wrote:
> EscVector wrote:
> > [earlier quoting snipped]
> >
> > Your point is taken regarding "cheap", but I have found that the DBA
> > often buys into problems that they shouldn't such as having to work
> > around backup related disk shortages.  RMAN is a great tool, but even
> > it is limited when the database grows to sufficient size.  "Cheap in
> > this instance involves high commit rate and related log sync waits
> > debugging vs increasing undo and getting the data loaded and then
> > possibly resizing undo after it completes.  I suggest it would be
> > cheaper or optimal to simple increase undo in this situation.  A 9
> > million row insert( not knowing the actual row size) at 200 bytes per
> > row comes out to under 2gb.  So, if there are indexes, that would bump
> > up undo, but still, let's gestimate at 5gb more undo for this insert.
> > Is it worth the time in this situation?  Your situation/undo mileage
> > may vary.
>
> My situation/undo is, for your way of doing it, undo would need to be
> larger than the data in the database.  commit=y makes much more sense,
> you don't have to worry so much about a slight increase in data after
> testing, or some wierdness in segment alignment, blowing the actual
> live load.  Of course, that's a judgement call on my part, I don't want
> to give up my weekends and new years eve to babysit this stuff.  Set it
> up in cron and forget about it until normal work hours.
>
> I agree with your point about buying into problems people shouldn't.
> As a contractor,
> I don't even get buy-in that a DBA is necessary, or that computers are
> cheaper than
> people, or that salespeople aren't necessarily the best system
> integration experts.  Truly a strange result considering the people
> making the decisions are budget, IS management and cost accounting
> types, one would think they would understand the limits of _their_
> tools.
>
> jg
> --
> @home.com is bogus.
> http://www.rocketracingleague.com/

I would look to minimize undo when loading.  10GB in my book is still
small.  The focus here was to suggest a solution to the issue that does
not involve skilled analysis, but rather a simple solution that
involves disk space.  I'd suggest giving management Goldratt's The Goal.

0
Junk6218 (272)
12/6/2006 4:31:41 PM
EscVector wrote:
> [earlier quoting snipped]
>
> I would look to minimize undo when loading.  10GB in my book is still

So isn't that what commit=Y does?  (Actually, in the ETL I'm currently
working on, I have a choice between culling data with multiple exp's
using QUERY, or doing massive deletes after imp.  Guess which has less
undo and other impacts?)

> small.  The focus here was to suggest a solution to the issue that does
> not involve skilled analysis, but rather a simple solution that
> involves disk space.  I'd suggest giving management Goldratt's The Goal.

Well, when you are talking hundreds of tables, some of which are enigmas
wrapped in wheels within wheels, some skill is necessary.  You can rest
assured I am minimizing complexity.

Management has its own fads; the only impact I can have is to show better
results than my competition.  I make suggestions, but I'm not going to
tell them how to run their business.  I'm not in the business of competing
with the USC or UC graduate schools of management.  Especially when my
work derives from the fact that this company is more successful than other
similar companies, so it is going around buying them up and someone has to
deal with the db implications of that.  It's just one more demonstration
of the fact that arbitrary tactical decisions in the IS department don't
mean much in the overall business decision process; strategic application
success is more important than database tuning details.  The fact that the
db-independent application depends on the scalability of Oracle can be our
little secret.

jg
--
@home.com is bogus.
"Intimate friendly cooperation between the management and the men"
http://melbecon.unimelb.edu.au/het/taylor/sciman.htm
Yeah, right!  Gonzo stacks 137 blocks per hour, so we'll dock your pay
if you don't!

0
joel-garry (4553)
12/6/2006 10:21:34 PM
Reply:

Similar Artilces:

Increase in 'log file sync' waits while 'log file parallel write' remains unchanged
We have small 9.2.0.8 database on AIX, from time to time it experiences waits on 'log file sync'. I am unable to explain these waits: - There is no increase in the number or size of transaction - While average wait on 'log file sync' increases from 10ms to 124ms average wait on 'log file parallel write' does not change much - approx 0.05 ms. NORMAL PERIOD: Load Profile ~~~~~~~~~~~~ Per Second Per Transaction --------------- --------------- Redo size: 30,016.82 1,384.01 Logical reads: 10,273.75 473.70 Block changes: 179.17 8.26 Physical reads: 5.36 0.25 Physical writes: 6.98 0.32 User calls: 139.45 6.43 Parses: 152.69 7.04 Hard parses: 0.11 0.01 Sorts: 81.40 3.75 Logons: 0.01 0.00 Executes: 587.69 27.10 Transactions: 21.69 PERIOD WITH WAITS ON LOG FILE SYNC Load Profile ~~~~~~~~~~~~ Per Second Per Transaction --------------- --------------- Redo size: 29,366.27 1,385.22 Logical reads: ...

Re: Wait Time ( or 'file d'attente')
Hamani - E(wait) is the mean, right? data wait; input wait; datalines; 0 5 10 15 20 60 90 120 ; proc means data=wait fw=8 CLM CSS SKEW CV STD KURT STDERR LCLM SUM MAX SUMWGT MEAN UCLM MIN USS N VAR NMISS P50 P75 P1 P90 P5 P95 P10 P99 P25 QRANGE PROBT T ; var wait; run; The MEANS Procedure Analysis Variable : wait Lower 95% Upper 95% Coeff of CL for Mean CL for Mean CSS Skewness Variation =============================================================== 2.5453 77.4547 14050.0 1.0326 112.0 2.5453 77.4547 =============================================================== Analysis Variable : wait Std Std Dev Kurtosis Error Sum Maximum Sum Wgts =============================================================== 44.8011 -0.4072 15.8396 320.0 120.0 8.0000 =============================================================== Analysis Variable : wait N 50th Mean Minimum USS N Variance Miss Pctl =============================================================== 40.0000 0 26850.0 8 2007.1 0 17.5000 =============================================================== Analysis Variable...

'is not' or '!='
A newbie question to you; what is the difference between statements like: if x is not None: and if x != None: Without any context, which one should be preferred? IMHO, the latter is more readable. On 2014-08-18 21:35, ElChino wrote: > A newbie question to you; what is the difference between statements > like: > if x is not None: > and > if x != None: > > Without any context, which one should be preferred? > IMHO, the latter is more readable. > "x == y" tells you whether x and y refer to objects that are equal. "x is y" tells you whether x and y actually refer to the same object. In the case of singletons like None (there's only one None object), it's better to use "is". "ElChino" <elchino@cnn.cn>: > A newbie question to you; what is the difference between statements > like: > if x is not None: > and > if x != None: Do the following: take two $10 bills. Hold one bill in the left hand, hold the other bill in the right hand. Now, the bill in the left hand "is not" the bill in the right hand. However, the bill in the left hand "==" the bill in the right hand. > Without any context, which one should be preferred? > IMHO, the latter is more readable. In almost all cases, both tests would result in the same behavior. However, the "is not" test is conceptually the correct one since you want...

'^=' and '~='?
Hello, What is the difference between '^=' and '~='? Thanks, Duckhye ...

'cat file' but only if 'file' exist
I'm trying to run 'cat *.x' only if *.x files exist. What is the cleanest way of doing this? I'm doing it by shopt -s nullglob for i in *.x; do cat $i; done but this is aweful typing. -- William Park, Open Geometry Consulting, <opengeometry@yahoo.ca> No, I will not fix your computer! I'll reformat your harddisk, though. On 2004-06-09, William Park wrote: > I'm trying to run 'cat *.x' only if *.x files exist. What is the > cleanest way of doing this? I'm doing it by > shopt -s nullglob > for i in *.x; do cat $i; done > but this is aweful typing. set -- *.x [ -f "$1" ] && cat -- *.x -- Chris F.A. Johnson http://cfaj.freeshell.org/shell =================================================================== My code (if any) in this post is copyright 2004, Chris F.A. Johnson and may be copied under the terms of the GNU General Public License 2004-06-9, 06:26(+00), William Park: > I'm trying to run 'cat *.x' only if *.x files exist. What is the > cleanest way of doing this? I'm doing it by > shopt -s nullglob > for i in *.x; do cat $i; done > but this is aweful typing. zsh -c 'cat ./*.x' With bash: shopt -s nullglob files=(./*.x) if (( ${#files[@]} > 0 )); then cat "${files[@]}" else printf >&2 '%s\n' "${0##*/}: no *.x files" false fi You could also do: c...
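
For comparison only, a rough Python equivalent of the nullglob idiom above (not part of the original shell discussion):

import glob
import sys

files = sorted(glob.glob("*.x"))
if files:
    for name in files:
        with open(name) as fh:
            sys.stdout.write(fh.read())
else:
    print("no *.x files", file=sys.stderr)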

``awk '!a[$0]++' file'' and ``awk '{if(!($0 in rec)) {rec[$0]=1; print $0;}}' file ''
Hi all, Currently, I'm reading the book on awk programming language, but meet the following two examples which I cann't understand so well: awk '!a[$0]++' file and awk '{if(!($0 in rec)) {rec[$0]=1; print $0;}}' file Could someone here please give me some hints or explanations on the logic of above codes? Regards -- ..: Hongyi Zhao [ hongyi.zhao AT gmail.com ] Free as in Freedom :. In article <meuejm$gsf$1@aspen.stu.neva.ru>, Hongyi Zhao <hongyi.zhao@gmail.com> wrote: > Hi all, > > Currently, I'm reading the bo...
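
Both awk one-liners print each input line only the first time it appears. A hedged Python sketch of the same order-preserving de-duplication, for readers more comfortable outside awk:

import sys

seen = set()
for line in sys.stdin:
    if line not in seen:        # plays the role of !a[$0]++ / !($0 in rec)
        seen.add(line)
        sys.stdout.write(line)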

web('file', 'filename' '-browser') not working
Hello, I'm working on a project involving Matlab and a nice flashy GUI. To call help files, I do: web('file', 'help.html' '-browser'); On my computer, this works. After compiling it, it works. If I manage to get the current directory wrong, somehow, the browser attempts to go to www.help.com, rather than put up an error. So far so good, except when the file isn't there, which I'm ignoring for the moment. However, when I run the program on other computers (I compile the program first), it always goes to www.help.com, no matter what the pwd says, or where I put help.html. This seems to happen on about half of the other computers I try it on. Any ideas? I had thought it was just path mixups but putting 'pwd' in the appropriate places seems to indicate nothing is amiss. I do not want to use explicit path names, (ie 'file:///c:/stuff/help.html' syntax) as this program is meant for some distribution and I can't guarantee that people will always have it running from the same location. Thank you in advance, --Anne I finally did a = sprintf('file:///%shelp.html#%s', pwd, section); web('file', a, '-browser'); I discovered that the file:/// syntax does not require paths to be deliminated by '/' but can use '\' unlike what I had thought. I read a bug report that using the internal web browser crashes when using compiled code. However, a similar crash happens for me when using R2006a....

lots of waiting on 'db file parallel write'
Folks, I'm currently trying to get to the bottom of why my 4 database writers spend so much time waiting on 'db file parallel write'... even when it appears that the system is not terribly busy.

Environment:
Oracle 10.1.0.4
Solaris 8 - 64bit
IBM SAN T700
Veritas filesystem (mostly everything is 0+1 raid with a few luns at raid 5)
4 db writers

So at different times during the day I notice that the system just "hangs". When I check what is waiting I see the number of wait seconds on my dbwn processes is anywhere from 10-140 seconds. Obviously if the db writers can't go fast enough, then I start to see sessions pile up and lots of waits on 'free buffer wait'. Our system is growing very fast because the company is growing at almost a 50% rate...so I'm just not sure if we are hitting hardware limitations. I do have an iTAR open with Oracle, but I'm just not getting anywhere with them. Our system is running faster than ever, but now at certain times during the day it just hangs...

Here are some parameter settings:
DB_CACHE_SIZE = 2GB
SHARED_POOL_SIZE = 1GB

Something interesting in the AWR reports....

Load Profile
~~~~~~~~~~~~                     Per Second    Per Transaction
                              ---------------  ---------------
          Redo size:               416,701.01          17,329.98
          Logical reads:            27,387.68           1,139.01
          Block chan...

10046 trace: 'latch free' then 'db file seq. read' waits
Hi, When you find a series of 'latch free' waits with p2=66 in a 10046 trace file that are immediately followed by a 'db file sequential read' wait (and then again 'latch free' p2=66 lines until another 'db file seq. read' line with different file#/block# values etc etc.) does it mean that Oracle now shows a wait for a seq. read on this very p1=file# and p2=block# on which it was trying to get a 'cache buffer chain' latch a second before ? (to me the answer is yes but... one never knows) Thanks. Sp Spendius wrote: > Hi, > When you find a series of 'latch free' waits with p2=66 in a 10046 > trace file that are immediately followed by a 'db file sequential read' > > wait (and then again 'latch free' p2=66 lines until another 'db file > seq. read' line with different file#/block# values etc etc.) does it > mean that Oracle now shows a wait for a seq. read on this very > p1=file# and p2=block# on which it was trying to get a 'cache buffer > chain' latch a second before ? > > (to me the answer is yes but... one never knows) > > Thanks. > Sp > Latch# 66 is this: SQL> select name from v$latch where latch#=66; NAME ---------------------------------------------------------------- KJCT flow control latch SQL> KJCT is a RAC communication layer. That latch wait means that before each read, Oracle is checking minor and unimportant things like whe...

Difference between a 'file' and a 'table'
With OS400, all files are objects that can be accessed relationally. When I view the file system from Navigator (windows client), I see there is the Database and SQL tables, then there is the filesystem. I can manipulate both these objects (tables, files) with SQL. So, what is the difference between a table and a file? -- Texeme http://texeme.com Without being strict in its definition a file and a table in OS/400 are the same. If you go into object types you will find that there is a Type *FILE and a Type *TBL, maybe I will ask the next question, what is the difference ...

'to file' and 'to workspace' help
I have a model that runs a 4 degrees of freedom robot here, right now I am only working with rotating the robot though. I have the model using a repeating sequence stair block as an input, this waits 10 seconds spins 5 degrees waits 10 secs spins 5 more degrees all the way up to 360 then i have a stop simulation block that stops the simulation once it has reached 360. The model works perfectly for my purpose, but in my feedback loop that reads my encoder position i am trying to insert a to file or to workspace block so that i can keep track of the encoder readings while the robot is stopped ...

'''''''''''''The Running Update/Append Queries Using VBA code Ordeal'''''''''''''' #2
Hi, Thanks for ur help there HJ. I know how to do the tasks you specified there. I would like for the update query to use field values from some of the fields on the form (frmInvoices) such as InvoiceNumber, DateFrom, DateTo. My problem is that an append/update query can't find the values in the open Form (frmInvoices) when I specify them as; [Forms]![frmInvoices]![InvoiceNumber] a select query has no problem finding the field values on a form. please help. Aaron Hi Aaron, Could you post the entire code that you are having trouble with? Now it is not possible to see what goes wrong. HJ "Aaron" <aaron@rapid-motion.co.uk> wrote in message news:260d7f40.0408120245.2f3d01f8@posting.google.com... > Hi, > > Thanks for ur help there HJ. > > I know how to do the tasks you specified there. > > I would like for the update query to use field values from some of the > fields on the form (frmInvoices) such as InvoiceNumber, DateFrom, > DateTo. My problem is that an append/update query can't find the > values in the open Form (frmInvoices) when I specify them as; > > [Forms]![frmInvoices]![InvoiceNumber] > > a select query has no problem finding the field values on a form. > > please help. > > Aaron First off, if you are not always using all the parameters specified in your form, then you have to add parameters to your query on the fly. Also, you can't just do something like qdf.SQL = "SE...

if str_mo not in ('','.') and str_da not in ('','.') and str_yy not in ('','.') Any shorter ?
Hi, there. =20 I'm just curious if it ever dawned on anybody how to abbreviate this line : if str_mo not in ('','.') and str_da not in ('','.') and str_yy not in ('','.')=20 =20 Igor Kurbeko Clinical Programmer Analyst 678 336 4328 ikurbeko@atherogenics.com =20 no brain no pain =20 how about: if not (str_mo in ('','.') or str_da in ('','.') or str_yy in ('','.')) OR if not (missing(str_mo) or missing(str_da) or missing(str_yy)) Eric On 22 Oct 03 21:13:37 GMT, ikurbeko@ATHER...
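
The original is SAS, but the shortening idea carries over to other languages; a small Python sketch with made-up sample values:

str_mo, str_da, str_yy = "10", "22", "03"   # hypothetical sample values
EMPTY = ("", ".")

if all(v not in EMPTY for v in (str_mo, str_da, str_yy)):
    print("all three date parts are populated")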

I try to logging data using cRIO NI 9014 . I make binary file on the Host and convert to LVM file on Windows but Time start(s) is not Zero I'don't know why
I can have LVM from binary but I have a problem in time (x value) I want start at time is Zero . I don't know why my program start at time is 0.682647 please help me. Thank you very much
windows.png: http://forums.ni.com/attachments/ni/170/343516/1/windows.png
cRIO NAI.zip: http://forums.ni.com/attachments/ni/170/343516/2/cRIO NAI.zip
host.png: http://forums.ni.com/attachments/ni/170/343516/3/host.png
I get my data at this picture
LVM.PNG: http://forums.ni.com/attachments/ni/170/343807/1/LVM.PNG
Hi KNAI, What is the time set to on your cRIO controller? Check out this knowledgebase (http://digital.ni.com/public.nsf/allkb/B9D619C68F7D5F5D86257251007E2491?OpenDocument) on how to set you cRIO time and make sure that you are getting the expected results. If so, then we can do more troubleshooting on your code. Thanks. ...

Tuning 'log file sync'
One of our databases experiences heavy waits on 'log file sync'. Top waits from V$SYSTEM_EVENT are given below. 'Log file sync' exceeds 'db file sequential read' and 'db file scattered read'. Metalink Note 223117.1 says that the key to understanding 'log file sync' is to compare average times waited for 'log file sync' and 'log file parallel write': if they are similar then it is an I/O problem, but if they are different then "the delay is caused by the other parts of Redo Logging mechanism that occur during a COMMIT/ROLLBACK (and are not I/O-related)". How to find out what these "other parts" are? This is 9.2.0.5.0 on Tru64. Thanks

                               Time   Average
                             Waited
Wait Event                    (sec)     (sec)
------------------------   -------    ------
SQL*Net message from c      9836478       .19
rdbms ipc message            798023       .30
PX Idle Wait                 763886      2.15
enqueue                      387408       .71
log file sync                237811       .21
pmon timer                   190927      3.11
smon timer                   186319      5.18
db file sequential read       85030         0
db file parallel write        57577       .20
local write wait              41371       .42
rdbms ipc reply               32636       .33
log file parallel write       27843       .03
latch free                     8538       .01

"Vsevolod Afanassiev" <vafanassiev@yahoo.com> w...
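
A hedged restatement of the Metalink comparison in Python, using the two averages from the listing above (0.21 s for 'log file sync' vs 0.03 s for 'log file parallel write'); the factor-of-two threshold is only an illustrative cutoff, not an Oracle-documented value.

avg_log_file_sync = 0.21            # seconds, from V$SYSTEM_EVENT above
avg_log_file_parallel_write = 0.03  # seconds, from V$SYSTEM_EVENT above

if avg_log_file_sync > 2 * avg_log_file_parallel_write:
    # the gap points at the non-I/O parts of commit processing
    print("look beyond redo I/O: CPU, LGWR scheduling, IPC between foreground and LGWR")
else:
    print("redo write I/O itself is the likely bottleneck")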

time with 'Write to measurement file'
Hello, I am making an acquisition waveform. The integration time is 1s divided in 128 samples. When I execute the software, the datas are stored in a text file with 'Write to measurement file'. This text file is a continuous list of 128 numbers-packs in rows. I want to add the time column (date+hour+minutes+secondes+1/10sec+1/100sec) in front of each number acquired. (And not only in front of the first number of the 128 package) I don't know how to add this. Can you help me? Thanks a lot. (See file attached)
test3.vi: http://forums.ni.com/attachments/ni/170/231045/1/test3.vi
Also, in the setup dialog for the Write Measurement File, there's an option to include an "X Column Value" that normally holds a timestamp. Enable one of these options and any included timestamp in your waveform will go into this column. If you don't have a waveform, you can build an extra column in your array of data points with the needed timestamp and it will work the same. Ed
Message Edited by Ed Dickens on 02-20-2007 03:37 PM
WLVMF Time Column.gif: http://forums.ni.com/attachments/ni/170/231116/1/WLVMF Time Column.gif
Your main problem is that you are actually losing the timestamps when you do the FFT on the waveform. The FFT outputs frequency information i...

Is 'du -b file' equivalent to 'wc -c file'?
Hi all, It seems that both 'du -b file' and 'wc -c file' can give the correct bytes count on the file. Are they equivalent for all case on this type of job? Regards -- ..: Hongyi Zhao [ hongyi.zhao AT gmail.com ] Free as in Freedom :. On 2015-05-22, Hongyi Zhao <hongyi.zhao@gmail.com> wrote: > Hi all, > > It seems that both 'du -b file' and 'wc -c file' can give the correct > bytes count on the file. Are they equivalent for all case on this type of > job? > > Regards There's no -b switch for du on my system (Mac OS X 10.10.3), and it's not in POSIX. The -c switch for wc is however standard, so use that instead. Note that you will have to use -m with wc to always get the correct number of *characters* in a file. -- :: Andreas Kusalananda Kahari, Uppsala University, Sweden :: :: a n d r e a s . k a h a r i @ i c m . u u . s e :: On Fri, 22 May 2015 06:52:32 +0000, Kusalananda wrote: > There's no -b switch for du on my system (Mac OS X 10.10.3), I can find the following for my case manpage of du: -b, --bytes equivalent to '--apparent-size --block-size=1' --apparent-size print apparent sizes, rather than disk usage; although the apparent size is usually smaller, it may be larger due to holes in ('sparse') files, internal fragmentation, indirect blocks, and the like -B, --block-size=SIZE scale sizes by SIZE befo...
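
A small Python cross-check of the same question, assuming an ordinary (non-sparse) file named somefile.x exists; apparent size and bytes-read agree for regular files, while du without --apparent-size reports disk blocks and can differ:

import os

path = "somefile.x"                 # hypothetical file name
print(os.path.getsize(path))        # apparent size, like 'wc -c'
with open(path, "rb") as fh:
    print(len(fh.read()))           # bytes actually read back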

A function with 'and' , 'not' , 'null' , 'car' and 'cdr'
What's this ? (defun enigma (x) (and (not (null x)) (or (null (car x)) (enigma (cdr x))))) "I suppose I should learn Lisp, but it seems so foreign." - Paul Graham, Nov 1983 On Wed, Oct 07 2015, CAI GENGYANG wrote: > What's this ? > > > (defun enigma (x) > (and (not (null x)) > (or (null (car x)) > (enigma (cdr x))))) Bad taste? It returns T if the list X contains nil as an element. It would be clearer to write (some #'null x). Helmut CAI GENGYANG ...
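
As the reply says, the function answers "does the list contain nil as an element?". A Python rendering of the same recursion, with None standing in for nil:

def enigma(x):
    return bool(x) and (x[0] is None or enigma(x[1:]))

print(enigma([1, None, 3]))   # True
print(enigma([1, 2, 3]))      # False
print(enigma([]))             # False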

Issue with the 'script' command in users '.kshrc' and '.bashrc' files
I am attempting to monitor users actions by using the following 'script' command: # exec script -a /tmp/${LOG} In order to capture all types of login whether 'telnet', su, su - etc etc I have put it into the .profile, .dtprofile and the *rc shell files. This works fine for CSH and SH, however if I launch another session (dtterm &) or switch shell to either KSH or BASH, then it seems to spin round and attempt to infinately create fresh 'script' sessions until I quit out. Example below: Script started, file is /tmp/x Script started, file is /tmp/x Script started,...

error: expected '=', ',', ';', 'asm' or '__attrib
Hi I'm trying to compile an ADC Driver & come acrosss the following error. I've no experience writing drivers before, and hence have no clue how to fix it. Hope someone out there has encountered the problem & suggesst a fix for the same. The Error is I get is : qadc.c: At top level: qadc.c:97: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'qadc_read' make: *** [qadc.o] Error 1 [root@localhost qadc]# ########################################################################### ADC Driver Code ########################################################################### #define MODULE #define __KERNEL__ #include <linux/config.h> #include <linux/module.h> #include <linux/kernel.h> /* printk */ #include <linux/fs.h> / #include <linux/errno.h> /* error codes */ #include <linux/types.h> /* size_t */ #include <linux/proc_fs.h> /* proc file system */ #include <linux/fcntl.h> #include <asm/system.h> /* cli, flags */ #include <asm/uaccess.h> /* copy from/to user */ /*Registers to get qadc access*/ volatile unsigned short * qadcmcr = (unsigned short *)0x40190000; volatile unsigned short * qacr0 = (unsigned short *)0x4019000a; volatile unsigned short * qacr1 = (unsigned short *)0x4019000c; volatile unsigned short * qacr2 = (unsigned short *)0x4019000e; volatile unsigned short * qasr0 = (unsigned short *)0x40190010; volatile unsigned short * qasr1...

logical to 'on' / 'off'
Hi, is there a function implemented doing this conversion? my Problem is, that I want to use the following code: set(handles.edit_curr_trq_sl,'Enable',get(hObject,'Value')) where get(hObject,'Value') gives the state of a checkbox thank you! function [str]=tf2oo(logic) switch logic case 0 str='off'; case 1 str='on'; end%switch end%function tf2oo() while i do not know a built in function, I use my own:) meisterbartsch wrote: > > > function [str]=tf2oo(logic) > switch logic > case 0 > str='off'; &g...

Override 'and' and 'or'
Is it possible to override 'and' and/or 'or'? I cannot find a special method for it... __and__ and __rand__ and __or__ and __ror__ are for binary manipulation... any proposals? Have marvelous sunday, Marco Dekker <m.aschwanden@gmail.com> wrote: > Is it possible to override 'and' and/or 'or'? I cannot find a special > method for it... __and__ and __rand__ and __or__ and __ror__ are for > binary manipulation... any proposals? If you want to customize the truth value testing you have to implement __nonzero__ " __nonzero__( self) Call...
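
A short Python sketch of the workaround named in the reply: 'and'/'or' themselves cannot be overloaded, but the truth test they rely on can be customized (the hook is __nonzero__ in Python 2 and __bool__ in Python 3):

class Container:
    def __init__(self, items):
        self.items = items

    def __bool__(self):            # __nonzero__ in Python 2
        return len(self.items) > 0

box = Container([])
print(bool(box))                   # False
print(box or "fallback")           # 'or' consults __bool__, so this prints: fallback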

'a'..'z'
Is it possible to achieve something like this? switch (mystring.charAt(0)) { case 'a'..'z': // do something break; } "cruster" <cruster@gmail.com> wrote in message news:1151319731.988814.326200@m73g2000cwd.googlegroups.com... > Is it possible to achieve something like this? > > switch (mystring.charAt(0)) { > case 'a'..'z': > // do something > break; > } > There are times when an if statement may be more appropriate ;) Sorry - java is not VB :) -- LTP :) cruster schreef: > Is it possible to achieve somethi...
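
Java has no 'a'..'z' case label, so the usual answer is a plain range comparison in an if; the same check written in Python for brevity (the Java form would be c >= 'a' && c <= 'z'):

mystring = "hello"
c = mystring[0]

if "a" <= c <= "z":
    print("first character is a lowercase ASCII letter")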
