log file sync vs log file parallel write probably not bug 2669566 #3

This is a continuation of a previous thread about 'log file sync' and
'log file parallel write' events.

Version     : 9.2.0.8
Platform    : Solaris
Application : Oracle Apps

The number of commits per second ranges between 10 and 30.
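
(For anyone wanting to reproduce that figure: a rough way to estimate the
commit rate is to sample the 'user commits' statistic in V$SYSSTAT twice and
divide by the interval. This is only a sketch; it assumes EXECUTE on
DBMS_LOCK, and the 60-second interval is arbitrary.)

-- Rough sketch: estimate commits per second from two V$SYSSTAT samples.
VARIABLE commits_before NUMBER
BEGIN
  SELECT value INTO :commits_before
  FROM   v$sysstat
  WHERE  name = 'user commits';
END;
/
EXEC DBMS_LOCK.SLEEP(60)

SELECT ROUND((value - :commits_before) / 60, 1) AS commits_per_sec
FROM   v$sysstat
WHERE  name = 'user commits';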

When querying the statspack performance data, the calculated average wait
time for the 'log file sync' event is on average about 10 times the wait
time for the 'log file parallel write' event.
Below are just two samples where the ratio is even about 20.


"snap_time"	      " log file parallel write avg"	"log file sync
avg"	"ratio
11/05/2008 10:38:26    	8,142
156,343	               19.20
11/05/2008 10:08:23	        8,434
201,915	               23.94
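
(For reference, per-snapshot averages like the ones above can be computed
from the PERFSTAT tables with a query along these lines. This is only a
sketch: it assumes the 9.2 STATS$SYSTEM_EVENT columns TIME_WAITED_MICRO and
TOTAL_WAITS, and it leaves out the DBID/INSTANCE_NUMBER filters for brevity.)

-- Sketch: average wait per snapshot for the two events, taking deltas
-- between consecutive statspack snapshots with LAG().
SELECT sn.snap_time,
       e.event,
       ROUND((e.time_waited_micro
              - LAG(e.time_waited_micro) OVER (PARTITION BY e.event ORDER BY e.snap_id))
           / NULLIF(e.total_waits
              - LAG(e.total_waits) OVER (PARTITION BY e.event ORDER BY e.snap_id), 0)
            ) AS avg_wait_us
FROM   perfstat.stats$system_event e
JOIN   perfstat.stats$snapshot     sn ON sn.snap_id = e.snap_id
WHERE  e.event IN ('log file sync', 'log file parallel write')
ORDER  BY sn.snap_time, e.event;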

So the wait time for a 'log file sync' is 10 times the wait time for a
'log file parallel write'.

First I thought that I was hitting bug 2669566.
But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool,
and I think its output proves that I am NOT hitting this bug.

Below is a sample of the output for the log writer.

-- End of snap 3
HEAD, SID, SNAPSHOT START    , SECONDS, TYPE, STATISTIC                 ,    DELTA, DELTA/SEC,   HDELTA, HDELTA/SEC
DATA,   4, 20081105 10:35:41,       30, STAT, messages sent             ,     1712,        57,    1.71k,      57.07
DATA,   4, 20081105 10:35:41,       30, STAT, messages received         ,      866,        29,      866,      28.87
DATA,   4, 20081105 10:35:41,       30, STAT, background timeouts       ,       10,         0,       10,        .33
DATA,   4, 20081105 10:35:41,       30, STAT, redo wastage              ,   212820,      7094,  212.82k,      7.09k
DATA,   4, 20081105 10:35:41,       30, STAT, redo writer latching time ,        2,         0,        2,        .07
DATA,   4, 20081105 10:35:41,       30, STAT, redo writes               ,      867,        29,      867,       28.9
DATA,   4, 20081105 10:35:41,       30, STAT, redo blocks written       ,    33805,      1127,   33.81k,      1.13k
DATA,   4, 20081105 10:35:41,       30, STAT, redo write time           ,      652,        22,      652,      21.73
DATA,   4, 20081105 10:35:41,       30, WAIT, rdbms ipc message         , 23431084,    781036,   23.43s,   781.04ms
DATA,   4, 20081105 10:35:41,       30, WAIT, log file parallel write   ,  6312957,    210432,    6.31s,   210.43ms
DATA,   4, 20081105 10:35:41,       30, WAIT, LGWR wait for redo copy   ,    18749,       625,  18.75ms,   624.97us

When adding up the DELTA/SEC values (which are in microseconds) for the wait
events, they always come to roughly a million microseconds.
In the example above: 781036 + 210432 = 991468 microseconds.
This is the case for all the snaps taken by snapper.
So I think that the wait time for 'log file parallel write'
must be more or less correct.
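
(For anyone without snapper handy, roughly the same cross-check can be done
directly against V$SESSION_EVENT: sample LGWR's cumulative wait times twice
and look at the per-second deltas. A minimal sketch, assuming LGWR is SID 4
as in the snapper output above, that TIME_WAITED_MICRO is available, and
that the session can read V$SESSION_EVENT from PL/SQL and call DBMS_LOCK.)

SET SERVEROUTPUT ON
DECLARE
  -- INDEX BY VARCHAR2 associative arrays are available from 9.2 onwards.
  TYPE t_waits IS TABLE OF NUMBER INDEX BY VARCHAR2(64);
  l_before   t_waits;
  l_interval PLS_INTEGER := 30;   -- seconds between the two samples
BEGIN
  -- First sample of the cumulative wait times (microseconds).
  FOR r IN (SELECT event, time_waited_micro
            FROM   v$session_event
            WHERE  sid = 4) LOOP
    l_before(r.event) := r.time_waited_micro;
  END LOOP;

  DBMS_LOCK.SLEEP(l_interval);

  -- Second sample: print the per-second delta for each event.
  FOR r IN (SELECT event, time_waited_micro
            FROM   v$session_event
            WHERE  sid = 4) LOOP
    IF l_before.EXISTS(r.event) THEN
      DBMS_OUTPUT.PUT_LINE(
        RPAD(r.event, 30) || ' ' ||
        ROUND((r.time_waited_micro - l_before(r.event)) / l_interval) ||
        ' us/sec');
    END IF;
  END LOOP;
END;
/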

So I still have the question: "Why is the 'log file sync' about 10
times the time of the 'log file parallel write'?"

Any clues?