


log file sync vs log file parallel write

Hello all,

While investigating performance problems in a database I discovered
that the 'log file sync' wait event is almost always in the top 5.
(I am looking at the Statspack data.)

The ratio of the average wait time of 'log file sync' to that of
'log file parallel write' is about 10.
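For reference, the averages in question can be read straight from v$system_event (the same source Statspack samples). A sketch only, assuming the standard 9i columns:

```sql
-- Average wait per event, in milliseconds.
-- time_waited is in centiseconds in this view, hence the * 10.
SELECT event,
       total_waits,
       ROUND(time_waited * 10 / NULLIF(total_waits, 0), 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');
```

A sync-to-write ratio well above 2 or 3 usually points at something other than pure disk latency.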

Does someone know why this may happen?

Regards Hans-Peter
10/3/2008 10:31:48 AM
comp.databases.oracle.server

On 3 okt, 12:31, HansP <hans-peter.sl...@atosorigin.com> wrote:

First of all,

'Log file sync' is associated with commit.
If you see this event in the top 5, you are committing way too often
(maybe for every individual record), and it is
developer beating time!!! They don't know what a transaction is, and
they need to be whipped for that.
Secondly, 'log file parallel write' obviously deals with flushing the
redo log buffer to disk. This means your disks are too slow, or
you have an I/O bottleneck.
Please note there is a plethora of resources on 'log file sync' online,
particularly in this very forum.
Please force yourself to be less lazy and use Google, your friend, to
search this and other forums, instead of asking this boring FAQ AGAIN.
The basic problem with many newbie DBAs is not that they are short of
knowledge; the basic problem is that they are too lazy and want to be
spoon-fed.
That won't get you anywhere.
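Whether the commit rate or the write speed is the bottleneck can be checked from v$sysstat. A sketch, using the standard statistic names:

```sql
-- 'user commits' vs 'redo writes': how many commits LGWR batches per write.
-- 'redo synch writes' counts the writes foreground sessions actually
-- waited on (i.e. the commits that paid for a 'log file sync').
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('user commits', 'redo writes', 'redo synch writes');
```

If 'user commits' per second is high while 'log file parallel write' stays fast, the extra 'log file sync' time is in posting and scheduling overhead rather than in the disks.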

--
Sybrand Bakker
Senior Oracle DBA
sybrandb1 (715)
10/3/2008 11:22:11 AM
On Oct 3, 6:31 am, HansP <hans-peter.sl...@atosorigin.com> wrote:

Hi Hans-Peter,

When you say...

>>average wait time of the 'log file sync' and the 'log file parallel write' gives a ratio of 10.<<

...do you mean each log file parallel write takes ten times longer on
average than each log file sync?

Also, what version are you on?  There was a bug in 9i (I forget which)
where "log file parallel write" was "under-reported".  Jonathan Lewis
had a writeup on it.

Regards,

Steve
stevedhoward (759)
10/3/2008 12:09:14 PM
First of all, I am not a newbie.
I know what the events mean.

I googled and read the books that are available on performance tuning.

The only thing I did not understand is that the average write to the
redo logs is about 10 ms.
But why should the log file sync be 10 times as high?

In the meantime I continued reading and came across a note by Jonathan
Lewis about bug 2669566,
so I think I may be hitting this bug.

You should not judge so fast.

On 3 okt, 13:22, sybrandb <sybra...@gmail.com> wrote:

10/3/2008 12:13:40 PM
Hello Steve,

I am on 9.2.0.8.
Yes, the log file sync is on average 10 times the average log file
parallel write.

I already came across the bug (2669566) on Jonathan Lewis's site,
so probably this bug applies.

Regards, Hans-Peter

On 3 okt, 14:09, Steve Howard <stevedhow...@gmail.com> wrote:

10/3/2008 12:26:03 PM