


log file sync vs log file parallel write (probably not bug 2669566)

This is a continuation of a previous thread about 'log file sync' and
'log file parallel write' events.

Version     : 9.2.0.8
Platform    : Solaris
Application : Oracle Apps

The number of commits per second ranges between 10 and 30.

When querying statspack performance data, the calculated average wait
time for the 'log file sync' event is on average 10 times the wait
time for the 'log file parallel write' event.
Below are just 2 samples where the ratio is even about 20.


"snap_time"	      " log file parallel write avg"	"log file sync
avg"	"ratio
11/05/2008 10:38:26    	8,142
156,343	               19.20
11/05/2008 10:08:23	        8,434
201,915	               23.94

So the wait time for a 'log file sync' is 10 times the wait time for a
'log file parallel write'.
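The ratio column can be recomputed directly from the two averages in the table (the averages are per-wait times, in microseconds given the statspack time_waited_micro column); a quick check:

```python
# Recompute the ratio column from the two statspack samples above.
# Averages are per-wait times in microseconds.
samples = [
    # (snap_time, 'log file parallel write' avg, 'log file sync' avg)
    ("11/05/2008 10:38:26", 8_142, 156_343),
    ("11/05/2008 10:08:23", 8_434, 201_915),
]

ratios = [round(lfs_avg / lfpw_avg, 2) for _, lfpw_avg, lfs_avg in samples]
print(ratios)  # -> [19.2, 23.94]
```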

First I thought that I was hitting bug 2669566.
But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper
tool.
And I think that it proves that I am NOT hitting this bug.

Below is a sample of the output for the log writer.

--  End of snap 3
HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC                ,    DELTA, DELTA/SEC,   HDELTA, HDELTA/SEC
DATA, 4, 20081105 10:35:41, 30, STAT, messages sent             ,     1712,        57,    1.71k,      57.07
DATA, 4, 20081105 10:35:41, 30, STAT, messages received         ,      866,        29,      866,      28.87
DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts       ,       10,         0,       10,        .33
DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage              ,   212820,      7094,  212.82k,      7.09k
DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time ,        2,         0,        2,        .07
DATA, 4, 20081105 10:35:41, 30, STAT, redo writes               ,      867,        29,      867,       28.9
DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written       ,    33805,      1127,   33.81k,      1.13k
DATA, 4, 20081105 10:35:41, 30, STAT, redo write time           ,      652,        22,      652,      21.73
DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message         , 23431084,    781036,   23.43s,   781.04ms
DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write   ,  6312957,    210432,    6.31s,   210.43ms
DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy   ,    18749,       625,  18.75ms,   624.97us

When adding up the DELTA/SEC values (which are in microseconds) for
the wait events, they always roughly add up to a million microseconds.
In the example above 781036 + 210432 = 991468 microseconds.
This is the case for all the snaps taken by snapper.
So I think that the wait time for the 'log file parallel write' event
must be more or less correct.
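That sanity check can be scripted; a small sketch using the DELTA/SEC numbers from the snapper sample above (including the tiny 'LGWR wait for redo copy' contribution, which the 991,468 figure leaves out):

```python
# Wait-event time accumulated by LGWR per second of wall clock,
# taken from the snapper DELTA/SEC column above (microseconds).
wait_us_per_sec = {
    "rdbms ipc message":       781_036,  # idle: LGWR sleeping, waiting for work
    "log file parallel write": 210_432,  # the actual redo I/O
    "LGWR wait for redo copy":     625,  # negligible
}

total = sum(wait_us_per_sec.values())
print(total)  # -> 992093, i.e. roughly 1,000,000 us accounted for per second
```

If LGWR's waits account for nearly the full second, there is little unaccounted time in which the write timing could be wrong, which is the argument the post is making.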

So I still have the question "Why is the 'log file sync' about 10
times the time of the 'log file parallel write'?"

Any clues?
11/5/2008 12:54:08 PM
comp.databases.oracle.server

On Nov 5, 4:54 am, HansP <hans-peter.sl...@atosorigin.com> wrote:
> This is a continuation of a previous thread about 'log file sync' and
> 'log file parallel write' events.
> [snip]
> So I still have the question "Why is the 'log file sync' about 10
> times the time of the 'log file parallel write'?"
>
> Any clues?

This is a wild guess:

The log file parallel write includes the I/O time only.  The log file
sync includes the time the log writer has to wait for the db writer to
tell the log writer the redo is done (all those "private strand flush
incomplete" notifications in the alert log), as well as all the other
overhead of the interprocess communication.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#BGGIBDJI

jg
--
@home.com is bogus.
Ya win some, ya lose some.
joel-garry
11/5/2008 6:30:15 PM
On Nov 5, 4:54 am, HansP <hans-peter.sl...@atosorigin.com> wrote:
> [snip]
> So I still have the question "Why is the 'log file sync' about 10
> times the time of the 'log file parallel write'?"
>
> Any clues?

This is a wild guess, based on the 10g docs, which may or may not
apply or work the same:

The log file parallel write includes the I/O time only.  The log file
sync includes the time the log writer has to wait for the db writer to
tell the log writer the redo is done (all those "private strand flush
incomplete" notifications in the 10g alert log), as well as all the
other overhead of the interprocess communication.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#BGGIBDJI

jg
--
@home.com is bogus.
Ya win some, ya lose some.
joel-garry
11/5/2008 6:57:37 PM
> This is a wild guess, based on the 10g docs, which may or may not
> apply or work the same:
>
> The log file parallel write includes the I/O time only.  The log file
> sync includes the time the log writer has to wait for the db writer
> to tell the log writer the redo is done (all those "private strand flush
> incomplete" notifications in the 10g alert log), as well as all the
> other overhead of the interprocess communication.
>
> http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitev...
>
> jg
> --
> @home.com is bogus.
> Ya win some, ya lose some.

Yes, the log file sync includes waiting for the log file parallel
write by the log writer, plus some overhead (signalling each other,
etc.).
Therefore I would expect the time for the log file sync to be a little
more than the time for the log file parallel write, but not 10 to 20
times the time for the log file parallel write.

regards Hans-Peter
11/6/2008 7:51:23 AM
On 5 nov, 19:57, joel garry <joel-ga...@home.com> wrote:
> On Nov 5, 4:54 am, HansP <hans-peter.sl...@atosorigin.com> wrote:
> > [snip]
>
> This is a wild guess, based on the 10g docs, which may or may not
> apply or work the same:
>
> The log file parallel write includes the I/O time only.  The log file
> sync includes the time the log writer has to wait for the db writer
> to tell the log writer the redo is done (all those "private strand flush
> incomplete" notifications in the 10g alert log), as well as all the
> other overhead of the interprocess communication.
>
> http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitev...
>
> jg
> --
> @home.com is bogus.
> Ya win some, ya lose some.

Hello Joel

Perhaps you are interested in the query on statspack.

select snap_id
     , snap_time
     , event
     , time_waited_micro
     , (time_waited_micro - p_time_waited_micro)
       / ((snap_time - p_snap_time) * 24) corrected_wait_time_h
     , total_waits
     , (total_waits - p_total_waits)
       / ((snap_time - p_snap_time) * 24) corrected_waits_h
     , (total_waits - p_total_waits)
       / ((snap_time - p_snap_time) * 24) / 3600 corrected_waits_s
     , trunc(((time_waited_micro - p_time_waited_micro)
              / ((snap_time - p_snap_time) * 24))
             / ((total_waits - p_total_waits)
                / ((snap_time - p_snap_time) * 24))) average
from (
   select sn.snap_id, sn.snap_time, se.event,
          se.time_waited_micro, se.total_waits,
          lag(sn.snap_id) over (partition by se.event
              order by sn.snap_id) p_snap_id,
          lag(sn.snap_time) over (partition by se.event
              order by sn.snap_time) p_snap_time,
          lag(se.time_waited_micro) over (partition by se.event
              order by sn.snap_id) p_time_waited_micro,
          lag(se.total_waits) over (partition by se.event
              order by sn.snap_id) p_total_waits,
          row_number() over (partition by event
              order by sn.snap_id) r
   from perfstat.stats$system_event se, perfstat.stats$snapshot sn
   where se.SNAP_ID = sn.SNAP_ID
   and se.EVENT = '&event'
   order by snap_id, event
  )
where time_waited_micro - p_time_waited_micro > 0
order by snap_id desc
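The core of that query, taking the lag() of the cumulative stats$system_event counters and dividing the deltas, can be sketched outside the database. The snapshot rows below are made-up cumulative values for one event, only the column names are modeled on statspack:

```python
# Per-interval average wait = delta(time_waited_micro) / delta(total_waits),
# mirroring the lag() arithmetic in the statspack query above.
# Made-up cumulative counters for a single wait event:
snapshots = [
    # (snap_id, time_waited_micro, total_waits)  -- cumulative since startup
    (100, 50_000_000, 400),
    (101, 58_000_000, 450),
    (102, 70_000_000, 510),
]

avgs = []
for prev, cur in zip(snapshots, snapshots[1:]):
    d_time = cur[1] - prev[1]    # microseconds waited in this interval
    d_waits = cur[2] - prev[2]   # waits completed in this interval
    if d_time > 0 and d_waits > 0:
        avgs.append((cur[0], d_time // d_waits))

print(avgs)  # -> [(101, 160000), (102, 200000)]  (avg wait in us per snap)
```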
11/6/2008 9:10:41 AM
On Nov 6, 10:51 am, HansP <hans-peter.sl...@atosorigin.com> wrote:
> > [snip]
>
> Yes, the log file sync includes waiting for the log file parallel
> write by the log writer, plus some overhead (signalling each other,
> etc.).
> Therefore I would expect the time for the log file sync to be a little
> more than the time for the log file parallel write, but not 10 to 20
> times the time for the log file parallel write.
>
> regards Hans-Peter

This includes the time spent by the process waiting for the signal
that the log file sync is complete, and then the time spent in the OS
run queue waiting for the CPU, plus the time occasionally spent by the
LGWR in the run queue. Kevin Closson covered this issue in great
detail in his blog here:
http://kevinclosson.wordpress.com/2007/07/21/manly-men-only-use-solid-state-disk-for-redo-logging-lgwr-io-is-simple-but-not-lgwr-processing/
One of the suggestions in that post is to renice the LGWR to the
highest priority possible so that it always gets CPU time when it
needs it, before anything else does, and is not preempted by less
important processes. Another post that might be helpful is here:
http://www.pythian.com/blogs/1098/tuning-log-file-sync-event-waits
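This decomposition can be written down as a rough model of one commit. All the component numbers below are illustrative guesses, not measurements from this system; only the 8 ms I/O time matches the statspack average earlier in the thread:

```python
# Illustrative (not measured) breakdown of one 'log file sync' wait:
# the foreground's sync wait wraps LGWR's I/O plus scheduling and IPC
# overhead on both sides, per the explanation above.
log_file_parallel_write_us = 8_000   # LGWR's actual I/O (~ the 8ms statspack avg)
lgwr_run_queue_us          = 60_000  # LGWR queued for a CPU (illustrative)
ipc_and_wakeup_us          = 12_000  # post/wakeup overhead (illustrative)
fg_run_queue_us            = 80_000  # foreground back in the OS run queue (illustrative)

log_file_sync_us = (log_file_parallel_write_us + lgwr_run_queue_us
                    + ipc_and_wakeup_us + fg_run_queue_us)
print(log_file_sync_us / log_file_parallel_write_us)  # -> 20.0
```

On a busy box, scheduling delay can dwarf the I/O itself, which is how a 20x ratio arises without the write timing being wrong.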

Hth,
   Vladimir M. Zakharychev
   N-Networks, makers of Dynamic PSP(tm)
   http://www.dynamicpsp.com
11/6/2008 10:24:22 AM
On Nov 6, 2:24 am, "Vladimir M. Zakharychev"
<vladimir.zakharyc...@gmail.com> wrote:
> [snip]
>
> This includes the time spent by the process waiting for the signal
> that the log file sync is complete, and then the time spent in the OS
> run queue waiting for the CPU, plus the time occasionally spent by the
> LGWR in the run queue. Kevin Closson covered this issue in great
> detail in his blog here:
> http://kevinclosson.wordpress.com/2007/07/21/manly-men-only-use-solid...
> One of the suggestions in that post is to renice the LGWR to the
> highest priority possible so that it always gets CPU time when it
> needs it, before anything else does, and is not preempted by less
> important processes. Another post that might be helpful is here:
> http://www.pythian.com/blogs/1098/tuning-log-file-sync-event-waits
>
> Hth,
>    Vladimir M. Zakharychev
>    N-Networks, makers of Dynamic PSP(tm)
>    http://www.dynamicpsp.com

Thanks Vladimir. I even thanked Kevin on that thread, so it was kind
of burbling around in the back of my head; I couldn't remember where
from.

jg
--
@home.com is bogus.
"Don=92t do this if you like your database..." - Kevin Closson
joel-garry
11/6/2008 9:15:50 PM