


file locking on linux

Does anyone know if file locking on Linux is still advisory? I thought
it was, and then I'm reading this, so I had doubts.

http://unix.stackexchange.com/questions/85994/how-to-list-processes-locking-file

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Two possibilities: lsof (my preference) or lslk (specifically for file 
locks):

[root@policyServer ~]# lslk | grep "master.lock"
SRC          PID   DEV  INUM   SZ TY M   ST WH  END LEN NAME
master      1650 253,0 12423   33  w 0    0  0    0   0 /var/lib/postfix/master.lock

[root@policyServer ~]# lsof | grep "master.lock"
master     1650      root   10uW     REG              253,0       33      12423 /var/lib/postfix/master.lock

Output of lslk is self-explanatory, but lsof puts the lock description in
the "FD" column (which is 10uW above). From the man page:

The mode character is followed by one of these lock characters, 
describing the type of lock applied to the file:

N for a Solaris NFS lock of unknown type;
r for read lock on part of the file;
R for a read lock on the entire file;
w for a write lock on part of the file;
W for a write lock on the entire file;
u for a read and write lock of any length;
U for a lock of unknown type;
x for an SCO OpenServer Xenix lock on part of the file;
X for an SCO OpenServer Xenix lock on the entire file;
space if there is no lock.


What would be the best way to implement file locking in C++?
Popping
11/16/2016 8:31:20 PM

On 2016-11-16, Popping mad <rainbow@colition.gov> wrote:
> Does anyone know if file locking on Linux is still advisory?

File locking conforming to the advisory API will always be advisory;
that's its defined semantics.

Does anyone know if the GNU C compiler's && operator is still
short-circuiting?
Kaz
11/16/2016 8:53:41 PM
Kaz Kylheku <221-501-9011@kylheku.com> writes:
>On 2016-11-16, Popping mad <rainbow@colition.gov> wrote:
>> Does anyone know if file locking on Linux is still advisory?
>
>File locking conforming to the advisory API will always be advisory;
>that's its defined semantics.
>
>Does anyone know if the GNU C compiler's && operator is still
>short-circuiting?

What do you mean by short-circuiting?   The operator is explicitly
defined in the language as "conditional and" in which the second
operand will (and must) not be evaluated if the first is true.

 if ((ptr != NULL) && (ptr->field == 0))
scott
11/16/2016 9:06:50 PM
On 2016-11-16, Scott Lurndal <scott@slp53.sl.home> wrote:
> Kaz Kylheku <221-501-9011@kylheku.com> writes:
>>On 2016-11-16, Popping mad <rainbow@colition.gov> wrote:
>>> Does anyone know if file locking on Linux is still advisory?
>>
>>File locking conforming to the advisory API will always be advisory;
>>that's its defined semantics.
>>
>>Does anyone know if the GNU C compiler's && operator is still
>>short-circuiting?
>
> What do you mean by short-circuiting?   The operator is explicitly
> defined in the language as "conditional and" in which the second
> operand will (and must) not be evaluated if the first is true.
>
>  if ((ptr != NULL) && (ptr->field == 0))

That is called short-circuiting.

OR can be modeled as two (or more) parallel switches in a circuit. If we
know that one of them is closed, we don't have to look at the others,
because they are all shorted out by the one which is closed.

(Going with the electrical analogy that closely, AND short-circuiting
should perhaps be called open-circuiting, though that calls for a
revision to a decades-old established jargon.)
Kaz
11/16/2016 9:15:00 PM
On 16/11/2016 22:06, Scott Lurndal wrote:

> Kaz Kylheku <221-501-9011@kylheku.com> writes:
>> On 2016-11-16, Popping mad <rainbow@colition.gov> wrote:
>>> Does anyone know if file locking on Linux is still advisory?
>>
>> File locking conforming to the advisory API will always be advisory;
>> that's its defined semantics.
>>
>> Does anyone know if the GNU C compiler's && operator is still
>> short-circuiting?
> 
> What do you mean by short-circuiting?   The operator is explicitly
> defined in the language as "conditional and" in which the second
> operand will (and must) not be evaluated if the first is true.
> 
>  if ((ptr != NULL) && (ptr->field == 0))

The way I understood it is:

OP asked if advisory file locking was still advisory, so Kaz
replied (tongue-in-cheek)
"is a short-circuiting operator still short-circuiting?".

Regards.

Noob
11/16/2016 9:19:45 PM
On 11/16/2016 04:19 PM, Noob wrote:
> On 16/11/2016 22:06, Scott Lurndal wrote:
> 
>> Kaz Kylheku <221-501-9011@kylheku.com> writes:
>>> On 2016-11-16, Popping mad <rainbow@colition.gov> wrote:
>>>> Does anyone know if file locking on Linux is still advisory?
>>>
>>> File locking conforming to the advisory API will always be advisory;
>>> that's its defined semantics.
>>>
>>> Does anyone know if the GNU C compiler's && operator is still
>>> short-circuiting?
>>
>> What do you mean by short-circuiting?   The operator is explicitly
>> defined in the language as "conditional and" in which the second
>> operand will (and must) not be evaluated if the first is true.
>>
>>  if ((ptr != NULL) && (ptr->field == 0))
> 
> The way I understood it is:
> 
> OP asked if advisory file locking was still advisory, so Kaz
> replied (tongue-in-cheek)
> "is a short-circuiting operator still short-circuiting?".
> 
> Regards.
> 


But that wasn't the question, FWIW:

Does anyone know if file locking on Linux is still advisory? I thought
it was, and then I'm reading this, so I had doubts.

That was the question.  :shoulder shrug

ruben
11/16/2016 9:57:45 PM
Popping mad <rainbow@colition.gov> writes:
> Does anyone know if file locking on Linux is still advisory?

Mandatory locking is also supported, as a form of 'explicitly
coordinating possibly conflicting accesses' (=> fcntl(2), 'Mandatory
Locking' and 'Leases').

But this is a bit risky, as there are known races (=> fcntl(2), 'BUGS'),
and also because it locks files (i.e., inodes), not 'file names'. E.g.,
another process could unlink the old file and create a new file with the
old name.
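
As an illustration, a minimal sketch of guarding against that race with
plain advisory locking: take the lock through a descriptor, then verify
that the name still refers to the inode that was locked. lock_by_name()
is a hypothetical helper and error handling is trimmed:

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Open and write-lock "path", then check that the name still refers to
   the inode we locked; retry if the file was replaced meanwhile. */
int lock_by_name(const char *path)
{
    for (;;) {
        int fd = open(path, O_RDWR);
        if (fd < 0)
            return -1;

        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* whole file, blocking */
            close(fd);
            return -1;
        }

        struct stat by_fd, by_name;
        if (fstat(fd, &by_fd) == 0 && stat(path, &by_name) == 0 &&
            by_fd.st_ino == by_name.st_ino && by_fd.st_dev == by_name.st_dev)
            return fd;   /* we hold a lock on the file the name points to */

        close(fd);       /* the name was re-pointed; try again */
    }
}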
Rainer
11/16/2016 10:56:16 PM
On Wed, 16 Nov 2016 21:06:50 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
>Kaz Kylheku <221-501-9011@kylheku.com> writes:
>>On 2016-11-16, Popping mad <rainbow@colition.gov> wrote:
>>> Does anyone know if file locking on Linux is still advisory?
>>
>>File locking conforming to the advisory API will always be advisory;
>>that's its defined semantics.
>>
>>Does anyone know if the GNU C compiler's && operator is still
>>short-circuiting?
>
>What do you mean by short-circuiting?   The operator is explicitly
>defined in the language as "conditional and" in which the second
>operand will (and must) not be evaluated if the first is true.

ITYM if the first is false. With OR it would be true. XOR of course has to
evaluate both all the time.

-- 
Spud


spud
11/17/2016 9:40:05 AM
On 16.11.16 21.53, Kaz Kylheku wrote:
> Does anyone know if the GNU C compiler's && operator is still
> short-circuiting?

It never stopped doing so.

Optimization may introduce another execution order, but the results
should be consistent as long as you do not access data concurrently.
And in the latter case you need synchronization or memory barriers
rather than operator &&, because most platforms do not guarantee
strictly ordered memory access.


Marcel
Marcel
11/17/2016 9:44:54 AM
On 16.11.16 23.56, Rainer Weikusat wrote:
> But this is a bit risky, as there are known races (=> fcntl(2), 'BUGS'),
> and also because it locks files (i.e., inodes), not 'file names'. E.g.,
> another process could unlink the old file and create a new file with the
> old name.

Yeah, that's the reason why rotated log files can never be read reliably
exactly once in order.


Marcel
Marcel
11/17/2016 9:47:43 AM
ruben safir <ruben@mrbrklyn.com> writes:

>Does anyone know if file locking on linux is still advisory.  I thought
>it was and then I'm reading this, so I had doubts.

Linux supports both advisory and mandatory file locking.
scott
11/17/2016 1:43:59 PM
On Thu, 17 Nov 2016 13:43:59 +0000, Scott Lurndal wrote:

> ruben safir <ruben@mrbrklyn.com> writes:
> 
>>Does anyone know if file locking on linux is still advisory.  I thought
>>it was and then I'm reading this, so I had doubts.
> 
> Linux supports both advisory and mandatory file locking.

how is that done in C++?
Popping
11/26/2016 5:42:21 PM
On Thu, 17 Nov 2016 10:47:43 +0100, Marcel Mueller wrote:

> read reliably exactly once in order.


what does that exactly mean?
Popping
11/26/2016 5:43:57 PM
On 26.11.16 18.43, Popping mad wrote:
> On Thu, 17 Nov 2016 10:47:43 +0100, Marcel Mueller wrote:
>
>> read reliably exactly once in order.
>
> what does that exactly mean?

You cannot write a program that reads each log entry exactly once 
because of a race condition. During log rotation you never know whether 
you need to continue reading in an old file or in the current one.
You could track the inode number, but this will sadly fail if a backup is
restored or the files are moved for some other reason. It will probably
also fail for old files that get compressed, since compression likely
changes the inode number.
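
To make the failure mode concrete, here is a rough sketch of the
inode-tracking approach (essentially what 'tail -F' does); the path name
is illustrative, and it can still lose or duplicate entries in exactly
the races described above:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Follow a log file, reopening it when rotation swaps the inode
   behind the name. */
void follow(const char *logpath)
{
    FILE *f = fopen(logpath, "r");
    struct stat cur, now;
    char line[4096];

    if (!f || fstat(fileno(f), &cur) != 0)
        return;

    for (;;) {
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);        /* consume new entries */
        clearerr(f);                    /* allow further reads after EOF */

        /* rotation check: does the name still refer to our inode? */
        if (stat(logpath, &now) == 0 && now.st_ino != cur.st_ino) {
            fclose(f);                  /* anything appended to the old
                                           file after this point is lost */
            f = fopen(logpath, "r");
            if (!f || fstat(fileno(f), &cur) != 0)
                return;
        }
        sleep(1);
    }
}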




Marcel
Marcel
11/26/2016 6:08:40 PM
Marcel Mueller wrote in message <o1cj38$mcd$1@gwaiyur.mb-net.net>:
> You cannot write a program that reads each log entry exactly once 
> because of a race condition. During log rotation you never know whether 
> you need to continue reading in an old file or in the current one.
> You could track the inode number, but this will sadly fail if a backup is
> restored or the files are moved for some other reason. It will probably
> also fail for old files that get compressed, since compression likely
> changes the inode number.

Blame whoever thought log rotation was a good idea in the first place.
Nicolas
11/26/2016 6:55:18 PM
On 26.11.16 19.55, Nicolas George wrote:
> Blame whoever thought log rotation was a good idea in the first place.

A more sophisticated approach would use a sortable time stamp of the
first entry (or the starting point of a fixed time window) as the file
name extension.
This is human readable. You no longer need to guess which logfile
contains the interesting part.
And there is no need to rename logfiles anymore (and so no race
condition); just delete the ones that are too old.

With this concept a file name with the time stamp and a ftell location 
provides a stable pointer to a specific log entry. If atomic appends are 
used (possible on many *ix OS) several concurrent write and read 
operations are safe without any further synchronization, even over the 
network. BTDTMT.

To keep compatibility with old software a link to the most recent file 
would do the job as well.
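
As a rough sketch of that scheme, assuming that a single write() on an
O_APPEND descriptor is atomic for moderately sized records (which holds
on common local *ix filesystems); the file name pattern is illustrative:

#include <fcntl.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Append one newline-terminated record to a logfile whose name is a
   sortable time stamp (here: one file per day). */
int log_append(const char *record)
{
    char name[64];
    time_t now = time(NULL);
    struct tm tm;

    gmtime_r(&now, &tm);
    strftime(name, sizeof name, "app-%Y-%m-%d.log", &tm);

    int fd = open(name, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;

    /* one record, one write(): readers never see half a record
       interleaved with another writer's output */
    ssize_t n = write(fd, record, strlen(record));
    close(fd);
    return n == (ssize_t)strlen(record) ? 0 : -1;
}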


However, the binary logfiles of systemd are even worse.
A machine-readable header for each log entry would have done the job as
well. Then you can filter by unit, priority, etc. Also BTDTMT.


Marcel
Marcel
11/26/2016 11:31:11 PM
Marcel Mueller wrote in message <o1d5vv$5b5$1@gwaiyur.mb-net.net>:
> On 26.11.16 19.55, Nicolas George wrote:
>> Blame whoever thought log rotation was a good idea in the first place.
> 
> A more sophisticated approach would use a sortable time stamp of the
> first entry (or the starting point of a fixed time window) as the file
> name extension.
> This is human readable. You no longer need to guess which logfile
> contains the interesting part.
> And there is no need to rename logfiles anymore (and so no race
> condition); just delete the ones that are too old.
> 
> With this concept a file name with the time stamp and a ftell location 
> provides a stable pointer to a specific log entry. If atomic appends are 
> used (possible on many *ix OS) several concurrent write and read 
> operations are safe without any further synchronization, even over the 
> network. BTDTMT.
> 
> To keep compatibility with old software a link to the most recent file 
> would do the job as well.

Exactly.

> However, the binary logfiles of systemd are even worse.

I personally have not looked at the implementation details. Can you
explain in details what are the design flaws?
Nicolas
11/27/2016 8:47:49 AM
Marcel Mueller <news.5.maazl@spamgourmet.org> writes:
> On 26.11.16 18.43, Popping mad wrote:
>> On Thu, 17 Nov 2016 10:47:43 +0100, Marcel Mueller wrote:
>>
>>> read reliably exactly once in order.
>>
>> what does that exactly mean?
>
> You cannot write a program that reads each log entry exactly once
> because of a race condition. During log rotation you never know
> whether you need to continue reading in an old file or in the current
> one.

Uncoordinated, concurrent access to shared resources doesn't "work". If
that's a real problem, such a program could work with a copy of the
files or advisory locking on a shared 'lock file' could be used to
ensure that the files aren't rotated while they're being read.

OTOH, syslog is a central facility for collecting and storing diagnostic
output from background programs in order to facilitate post-mortem
analysis of software malfunctions. Abusing it for anything else, eg,
as a substitute for a system audit log, or to facilitate poor man's
process management, is a bad idea.

Likewise, forcing the debugging facility into an audit log is also a bad
idea.
Rainer
11/27/2016 2:18:10 PM
On 27.11.16 09.47, Nicolas George wrote:
> Marcel Mueller , dans le message <o1d5vv$5b5$1@gwaiyur.mb-net.net>, a
>> However, the binary logfiles of systemd are even worse.
>
> I personally have not looked at the implementation details. Can you
> explain in details what are the design flaws?

They are binary.

You cannot read them easily with an arbitrary application over the 
network. You cannot read them from a rescue CD with an arbitrary OS that 
can just access the file system.
Log files are important in the emergency case and should be accessible 
as easily as possible => missed the point.


Marcel
Marcel
11/27/2016 5:32:49 PM
On 27.11.16 15.18, Rainer Weikusat wrote:
>> You cannot write a program that reads each log entry exactly once
>> because of a race condition. During log rotation you never know
>> whether you need to continue reading in an old file or in the current
>> one.
>
> Uncoordinated, concurrent access to shared resources doesn't "work". If
> that's a real problem, such a program could work with a copy of the
> files or advisory locking on a shared 'lock file' could be used to
> ensure that the files aren't rotated while they're being read.

AFAIK you cannot prevent renaming of a file with file locks. Locks are 
cooperative, they just prevent other file locks from succeeding.

But as for computational algorithms, there are lock-free solutions for
concurrent access to files too. If files are only created with strictly
monotonically sorted names and only ever appended to at the end, then
concurrent reads can be implemented safely.
Depending on the file format, a record-end indicator may be required to
identify incomplete records at the end.
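
For instance, a sketch of a reader that treats '\n' as the record-end
indicator, so a concurrently appended, still-incomplete last record is
simply left for the next pass (the function name is illustrative and
text records are assumed):

#include <stdio.h>
#include <string.h>

/* Read from "offset", process only complete ('\n'-terminated) records
   and return a stable offset to resume from on the next pass. */
long read_complete_records(FILE *f, long offset)
{
    char buf[8192];

    if (fseek(f, offset, SEEK_SET) != 0)
        return offset;

    size_t n = fread(buf, 1, sizeof buf - 1, f);
    buf[n] = '\0';

    char *last_nl = strrchr(buf, '\n');
    if (last_nl == NULL)
        return offset;                    /* no complete record yet */

    size_t complete = (size_t)(last_nl - buf) + 1;
    fwrite(buf, 1, complete, stdout);     /* "process" the records */
    return offset + (long)complete;
}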

In fact, performance is not the only motivation for lock-free algorithms.
Another one is reliability. If two operations are not synchronized, then
they usually operate independently. So you cannot hit pitfalls like
priority inversion or deadlocks.
With an ordinary reader-writer lock, a single reader could block all
writers if it gets stuck while holding the lock. So when attaching a
synchronized reading application, the requirements for testing and
reliability are much stronger than in the case of a loosely coupled
lock-free reader.

> OTOH, syslog is a central facility for collecting and storing diagnostic
> output from background programs in order to facilitate post-mortem
>> analysis of software malfunctions. Abusing it for anything else, eg,
>> as a substitute for a system audit log, or to facilitate poor man's
>> process management, is a bad idea.

Indeed.
But sometimes it is useful to scan for some rare or critical events that
only write log entries, to create a service incident in case of a hit.
This requires not missing any entry, as well as not scanning an entry
twice and creating phantom incidents.


> Likewise, forcing the debugging facility into an audit log is also a bad
> idea.

Log files are very useful for debugging too, especially to trace back in 
time. But one should not pollute the system's central log for this 
purpose. Instead a separate trace file should be used. (Someone should
tell this to the VDR authors.)


Marcel
Marcel
11/27/2016 5:58:45 PM
Marcel Mueller wrote in message <o1f5c1$n7k$1@gwaiyur.mb-net.net>:
> They are binary.
> 
> You cannot read them easily with an arbitrary application over the 
> network. You cannot read them from a rescue CD with an arbitrary OS that 
> can just access the file system.
> Log files are important in the emergency case and should be accessible 
> as easily as possible => missed the point.

To read log files, you need a driver for the disk controller, a driver
for the partition scheme, possibly a driver for the RAID layer, possibly
a driver for the encryption layer, a driver for the filesystem, possibly
a decompression program, and a pager.

The format of systemd's binary logs is documented and rather simple for
sequential dump. I would say reading it is simpler than any of these
other requirements (not counting the pager, it raises different kinds of
problems) except the partition scheme.

Can you explain how, in your estimation, one extra required small piece
of program would be a real issue in the middle of such a complex stack?
Nicolas
11/27/2016 6:03:02 PM
Popping mad <rainbow@colition.gov> writes:
>On Thu, 17 Nov 2016 13:43:59 +0000, Scott Lurndal wrote:
>
>> ruben safir <ruben@mrbrklyn.com> writes:
>> 
>>>Does anyone know if file locking on Linux is still advisory? I thought
>>>it was, and then I'm reading this, so I had doubts.
>> 
>> Linux supports both advisory and mandatory file locking.
>
>how is that done in C++?

The same way it is done in C, of course.
scott
11/27/2016 6:30:27 PM
On Sun, 2016-11-27, Marcel Mueller wrote:
> On 27.11.16 15.18, Rainer Weikusat wrote:
....
>> OTOH, syslog is a central facility for collecting and storing diagnostic
>> output from background programs in order to facilitate post-mortem
>> analysis of software malfunctions. Abusing it for anything else, eg,
>> as a substitute for a system audit log, or to facilitate poor man's
>> process management, is a bad idea.
>
> Indeed.
> But sometimes it is useful to scan for some rare or critical events that 
> only write log entries, to create a service incident in case of a hit.
> This requires not missing any entry, as well as not scanning an entry
> twice and creating phantom incidents.

I think that for most/all uses, a small chance of duplicates is
acceptable.  The same way I think people accept it for mail arriving
via SMTP.

/Jorgen

-- 
  // Jorgen Grahn <grahn@  Oo  o.   .     .
\X/     snipabacken.se>   O  o   .
Jorgen
11/27/2016 7:04:18 PM
On Sat, 26 Nov 2016 17:42:21 +0000, Popping mad wrote:

>>>Does anyone know if file locking on Linux is still advisory? I thought
>>>it was, and then I'm reading this, so I had doubts.
>> 
>> Linux supports both advisory and mandatory file locking.
> 
> how is that done in C++?

How is what done? Locking or mandatory locking?

Locking is performed via fcntl(), lockf() or flock(). flock() is BSD, the
other two are SysV.

On Linux: the two types of lock don't interact (i.e. acquiring a lock via
fcntl() won't prevent another process from acquiring a lock for the same
file via flock(), and vice versa); lockf() is a wrapper around fcntl();
lockf/fcntl locks detect deadlock, flock locks don't.
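
A minimal sketch of taking advisory locks from C (and hence from C++)
with two of these interfaces; the path and the function name are
illustrative, and error handling is trimmed:

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int lock_demo(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;

    /* BSD style: exclusive lock on the whole file, blocking */
    if (flock(fd, LOCK_EX) < 0) {
        close(fd);
        return -1;
    }
    flock(fd, LOCK_UN);

    /* SysV style: lock from the current offset to end of file
       (length 0); on Linux this is a wrapper around fcntl() */
    if (lockf(fd, F_LOCK, 0) < 0) {
        close(fd);
        return -1;
    }
    lockf(fd, F_ULOCK, 0);

    close(fd);
    return 0;
}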

Mandatory locking is described in

/usr/src/linux/Documentation/filesystems/mandatory-locking.txt

The short version is that mandatory locking requires the filesystem to be
mounted with the "mand" option, and the file in question must have the
setgid bit set and the group-execute bit cleared. Then, any locks applied
with fcntl() or lockf() (but not flock()) will be mandatory, i.e. they will
prevent other processes from reading or writing the locked region (whereas
advisory locking merely prevents other processes from acquiring
conflicting locks).
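
As a sketch, marking a file for mandatory locking as described above
(mark_mandatory() is a hypothetical helper; the filesystem must still be
mounted with "mand"):

#include <sys/stat.h>

/* Set the setgid bit and clear group execute: the marker that makes
   fcntl()/lockf() locks mandatory on a "mand"-mounted filesystem. */
int mark_mandatory(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return -1;
    return chmod(path, (st.st_mode & 07777 & ~(mode_t)S_IXGRP) | S_ISGID);
}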

Mandatory locking is discouraged. It isn't specified by POSIX and programs
which aren't familiar with the concept of mandatory locking may not be
expecting the (non-standard) behaviour arising from attempting to access a
file subject to mandatory locks.

Nobody
12/1/2016 2:17:23 AM
Nobody <nobody@nowhere.invalid> writes:
> On Sat, 26 Nov 2016 17:42:21 +0000, Popping mad wrote:
>
>>>>Does anyone know if file locking on Linux is still advisory? I thought
>>>>it was, and then I'm reading this, so I had doubts.
>>> 
>>> Linux supports both advisory and mandatory file locking.
>> 
>> how is that done in C++?
>
> How is what done? Locking or mandatory locking?

[...]

> Mandatory locking is discouraged. It isn't specified by POSIX and programs
> which aren't familiar with the concept of mandatory locking may not be
> expecting the (non-standard) behaviour arising from attempting to access a
> file subject to mandatory locks.

It's also not safe to use unless all programs accessing a file via a
certain pathname play nice with each other. Eg, the common file update
pattern of writing to a temporary file and renaming it to the final name
will leave the program which had the original file open holding a
mandatory lock that refers to an anonymous i-node (according to the
relevant documentation). But if all programs were cooperating, they
could have used advisory locking instead.
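
For reference, a sketch of that update pattern (names are illustrative);
after the rename(), any mandatory lock held on the old file guards an
unlinked i-node:

#include <stdio.h>

int update_file(const char *path, const char *contents)
{
    char tmp[4096];
    FILE *f;

    snprintf(tmp, sizeof tmp, "%s.tmp", path);
    f = fopen(tmp, "w");
    if (f == NULL)
        return -1;

    if (fputs(contents, f) == EOF) {
        fclose(f);
        return -1;
    }
    if (fclose(f) != 0)
        return -1;

    /* atomically replace the old file under the same name */
    return rename(tmp, path);
}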
Rainer
12/1/2016 2:46:50 PM
Marcel Mueller <news.5.maazl@spamgourmet.org> writes:
> On 27.11.16 15.18, Rainer Weikusat wrote:
>>> You cannot write a program that reads each log entry exactly once
>>> because of a race condition. During log rotation you never know
>>> whether you need to continue reading in an old file or in the current
>>> one.
>>
>> Uncoordinated, concurrent access to shared resources doesn't "work". If
>> that's a real problem, such a program could work with a copy of the
>> files or advisory locking on a shared 'lock file' could be used to
>> ensure that the files aren't rotated while they're being read.
>
> AFAIK you cannot prevent renaming of a file with file locks. Locks are
> cooperative, they just prevent other file locks from succeeding.

Which means a file lock (or something equivalent) can be used to ensure
that the logrotation program won't modify anything while a log reading
program is inspecting the data.

> But as for computational algorithms, there are lock-free solutions for
> concurrent access to files too. If files are only created with
> strictly monotonically sorted names and only ever appended to at the
> end, then concurrent reads can be implemented safely.

[...]

> In fact, performance is not the only motivation for lock-free
> algorithms.

The real motivation is usually people being afraid of locking for no
specific reason. Insofar as certain tasks are CPU-bound, they may profit
from avoiding serialization so that they can instead go on with their
computations independently of each other. OTOH, using a complicated,
'lock-free' algorithm just in case is IMHO a textbook example of
'premature optimization'.

For file system operations which always go through the kernel, the
performance argument becomes a bit bizarre.

[...]

> With an ordinary reader-writer lock, a single reader could block all
> writers if it gets stuck while holding the lock.
> So when attaching a synchronized reading application, the requirements
> for testing and reliability are much stronger than in the case of a
> loosely coupled lock-free reader.

To me, this reasoning is roughly: I was afraid of something simple not
being implemented correctly, hence, I did something hideously
complicated instead, along the usual line of "make it so complicated
that it has no obvious bugs" ...

>> OTOH, syslog is a central facility for collecting and storing diagnostic
>> output from background programs in order to facilitate post-mortem
>> analysis of software malfunctions. Abusing it for anything else, eg,
>> as a substitute for a system audit log, or to facilitate poor man's
>> process management, is a bad idea.
>
> Indeed. But sometimes it is useful to scan for some rare or critical
> events that only write log entries, to create a service incident in
> case of a hit. This requires not missing any entry, as well as not
> scanning an entry twice and creating phantom incidents.

Assuming that this is really a sensible choice and not just the effect
of "water running downhill, anyway", it should be possible to do this in
the form of a log receiver using a reliable transport (AF_UNIX or TCP)
while leaving the recorded information alone.

>> Likewise, forcing the debugging facility into an audit log is also a bad
>> idea.
>
> Log files are very useful for debugging too, especially to trace back
> in time. But one should not pollute the system's central log for this
> purpose. Instead a separate trace file should be used.

That's unfortunately entirely unworkable in practice: Real systems run
more than one program. And diagnostic output of more than one process
might need to be interpreted in context in order to determine what went
wrong.

OTOH, I already strongly suspected that "systemd logging" would just end
up enforcing the idea that "this is all too much fuss, let's just Write
Our Own Log File [Like All Real Men Always Did]".
Rainer
12/1/2016 2:56:19 PM
Rainer Weikusat <rweikusat@talktalk.net> writes:

[...]

> OTOH, I already strongly suspected that "systemd logging" would just end
> up enforcing the idea that "this is all too much fuss, let's just Write
> Our Own Log File [Like All Real Men Always Did]".

Additional moral: Justifications for bad ideas change as fashion
dictates. But the bad ideas themselves live forever.

Rainer
12/1/2016 3:00:04 PM
On 11/30/2016 09:17 PM, Nobody wrote:
> Mandatory locking is discouraged. 


not if you can have multiple files talking to it at once.


ruben
12/5/2016 9:58:58 PM
On 01.12.16 15.56, Rainer Weikusat wrote:
> Marcel Mueller <news.5.maazl@spamgourmet.org> writes:
>> AFAIK you cannot prevent renaming of a file with file locks. Locks are
>> cooperative, they just prevent other file locks from succeeding.
>
> Which means a file lock (or something equivalent) can be used to ensure
> that the logrotation program won't modify anything while a log reading
> program is inspecting the data.

If and only if you can modify the logrotation program to check for the lock.


>> In fact performance is not the only motivation for lock free
>> algorithms.
>
> The real motivation is usually people being afraid of locking for no
> specific reason.

I think that's too simple. Of course, over-engineering occurs from time
to time. And some lock-free implementations are outperformed by
classical mutex locking in almost all situations. (I remember the .NET
ConcurrentDictionary in this case.)
But in other cases lock-free is not only faster but significantly
/easier/ to implement. E.g. immutable data structures, like the already
written part of a logfile, in conjunction with atomic replacement, are
very easy and powerful. Locking many readers, in contrast, could be
cumbersome.

> Insofar as certain tasks are CPU-bound, they may profit
> from avoiding serialization so that they can instead go on with their
> computations independently of each other.

Synchronization is no problem with 'normal' multi-core machines. But
with NUMA it gets more expensive. Over a LAN it's even more expensive.
And over a WAN it is almost a show stopper. It is e.g. almost impossible
to synchronize a distributed database with central locks. It would be
slower than an Atari ST.

> OTOH, using a complicated,
> 'lock-free' algorithm just in case is IMHO a textbook example of
> 'premature optimization'.

Well, a B-tree is complicated to implement too, and almost everyone uses
it from time to time.

> For file system operations which always go through the kernel, the
> performance argument becomes a bit bizarre.

I would not do this for performance reasons.


>> Indeed. But sometimes it is useful to scan for some rare or critical
>> events that only write log entries, to create a service incident in
>> case of a hit. This requires not missing any entry, as well as not
>> scanning an entry twice and creating phantom incidents.
>
> Assuming that this is really a sensible choice and not just the effect
> of "water running downhill, anyway" it should be possible to do this in
> form a log receiver using a reliable transport (AF_UNIX or TCP) while
> leaving the recorded information alone.

Well, for syslogd there may be an API. But the same logging scheme is
used by many applications too, e.g. with log4j or similar
implementations in other languages. OK, this could be extended by a
custom appender too. But adding custom code to an application usually
voids the warranty. So this is not an option.


>> Log files are very useful for debugging too, especially to trace back
>> in time. But one should not pollute the system's central log for this
>> purpose. Instead a separate trace file should be used.
>
> That's unfortunately entirely unworkable in practice: Real systems run
> more than one program. And diagnostic output of more than one process
> might need to be interpreted in context in order to determine what went
> wrong.

It depends on what kind of error you analyze. Precise time stamps could 
help for that purpose too.

> OTOH, I already strongly suspected that "systemd logging" would just end
> up enforcing the idea that "this is all too much fuss, let's just Write
> Our Own Log File [Like All Real Men Always Did]".

Writing several gigabytes through systemd is also not recommended. And
trace files tend to grow fast.
Furthermore, systemd is not portable. It may be available on platform #1
but not on the next one. Fewer dependencies are also a useful feature of
an application with respect to maintenance.


Marcel
Marcel
12/5/2016 11:17:06 PM
ruben safir wrote in message <o24nv2$nqj$1@reader1.panix.com>:
> not if you can have multiple files talking to it at once.

Mandatory file locking is still discouraged in this case.
Nicolas
12/7/2016 1:44:11 PM
On 12/07/2016 08:44 AM, Nicolas George wrote:
>> not if you can have multiple files talking to it at once.
> Mandatory file locking is still discouraged in this case.

which is why it was included: so that it wouldn't be used, and instead
you get race conditions

ruben
12/11/2016 5:38:32 AM
On Sun, 27 Nov 2016 18:03:02 +0000, Nicolas George wrote:

> To read log files, you need a driver for the disk controller, a driver
> for the partition scheme, possibly a driver for the RAID layer, possibly
> a driver for the encryption layer, a driver for the filesystem, possibly
> a decompression program, and a pager.

red herring alert

fwiw, in the field you learn that they get corrupted and you can't read
them at all
Popping
12/12/2016 3:28:22 PM
Popping mad wrote in message <o2mfmm$t7i$2@reader1.panix.com>:
> fwiw, in the field you learn that they get corrupted and you can't read
> them at all

And THIS is the problem. Not the fact that they are binary.
Nicolas
12/12/2016 4:40:42 PM
Marcel Mueller <news.5.maazl@spamgourmet.org> writes:
> On 01.12.16 15.56, Rainer Weikusat wrote:
>> Marcel Mueller <news.5.maazl@spamgourmet.org> writes:
>>> AFAIK you cannot prevent renaming of a file with file locks. Locks are
>>> cooperative, they just prevent other file locks from succeeding.
>>
>> Which means a file lock (or something equivalent) can be used to ensure
>> that the logrotation program won't modify anything while a log reading
>> program is inspecting the data.
>
> If and only if you can modify the logrotation program to check for the
> lock.

If it's indeed a log rotation program, it could also be wrapped. Eg, on
some system which may host an arbitrary number of virtualized VPN
servers, I've replaced the logrotate program with a shell script which
acquires a write lock on a file with a well-known name and then invokes
the original program.

The effect of this is that there's only one logrotate process running at
any given time, even though all the logrotates (one for the host and one
for each virtualized VPN server) are invoked at about the same time by
the cron processes of all these servers.
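
A rough C equivalent of such a wrapper (the actual wrapper was a shell
script; the lock-file path and the renamed binary are illustrative):

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    (void)argc;

    int fd = open("/var/lock/logrotate.lock", O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return 1;

    /* serialize: block until no other wrapper instance holds the lock */
    if (flock(fd, LOCK_EX) < 0)
        return 1;

    /* fd stays open across exec, so the lock is held until the real
       logrotate exits */
    execv("/usr/sbin/logrotate.real", argv);
    return 1;    /* only reached if exec failed */
}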
Rainer
12/13/2016 6:18:22 PM