File size limit exceeded when writing logs to a file.

Hi all,

I am invoking a 'C' program from a UNIX shell script and redirecting
the output and error to a file.
In this context I have a doubt regarding the file size.

When I am redirecting the output to a file: since all files in
UNIX have a specified upper limit on file size, what happens if
we are still trying to write output to the file after that upper
limit has been crossed?

In this case, will the process that is writing logs to the file get
killed or not?
If it is killed, how do I handle this situation in the UNIX script?

I foresee this situation in my script. Please help me with how I can
overcome it.

Thanks in advance,
Ravikanth
Ravikanth
9/25/2008 9:09:13 AM
comp.unix.shell

5 Replies

Ravikanth <rkanth.vvn@gmail.com> writes:

> When I am redirecting the output to a file, since all files in
> UNIX have a specified upper limit on file size.

What do you think the limit is?
And do you expect to exceed that limit?

If you plan to write a terabyte file, perhaps you need to think of
another approach that might make accessing it easier?

Maxwell
9/25/2008 11:05:39 AM
On Sep 25, 4:05 pm, Maxwell Lol <nos...@com.invalid> wrote:
> Ravikanth <rkanth....@gmail.com> writes:
> > When I am redirecting the output to a file, since all files in
> > UNIX have a specified upper limit on file size.
>
> What do you think the limit is?
> And do you expect to exceed that limit?
>
> If you plan to write a terabyte file, perhaps you need to think of
> another approach that might make accessing it easier?

I am developing a client-server system which runs 24x7 for
months with very little downtime.
Since I am still in the development phase, I wanted to trace the
program execution flow when a client communicates with the server.
There is not one client but several clients communicating with the
server, so I used several 'printf' statements in the server code, and
I am tracking the server logs to verify how my system responds to the
client requests.

And now the log is on the server,
so there is every possibility of the log file exceeding its maximum
size.
My doubt here is: if we are trying to write to a file that is already
filled to the maximum extent, will the process which is writing to
that file get killed?
In that case my server process gets killed.

This is my application, Maxwell.
Ravikanth
9/25/2008 11:25:44 AM
On Sep 25, 7:25 am, Ravikanth <rkanth....@gmail.com> wrote:
> On Sep 25, 4:05 pm, Maxwell Lol <nos...@com.invalid> wrote:
>
> > Ravikanth <rkanth....@gmail.com> writes:
> > > When I am redirecting the output to a file, since all files in
> > > UNIX have a specified upper limit on file size.
>
> > What do you think the limit is?
> > And do you expect to exceed that limit?
>
> > If you plan to write a terabyte file, perhaps you need to think of
> > another approach that might make accessing it easier?
>
> I am developing a client-server system which runs 24x7 for
> months with very little downtime.
> Since I am still in the development phase, I wanted to trace the
> program execution flow when a client communicates with the server.
> There is not one client but several clients communicating with the
> server, so I used several 'printf' statements in the server code, and
> I am tracking the server logs to verify how my system responds to the
> client requests.
>
> And now the log is on the server,
> so there is every possibility of the log file exceeding its maximum
> size.
> My doubt here is: if we are trying to write to a file that is already
> filled to the maximum extent, will the process which is writing to
> that file get killed?
> In that case my server process gets killed.
>
> This is my application, Maxwell.

Not answering your original question unfortunately, but in your
scenario I'd think it likely that you'd fill up a filesystem before
crossing a filesize boundary. In any case I think it's pretty standard
practice to log to multiple files (e.g. switching to a new file when a
certain size is reached, or by day) rather than a single big (and
never-ending) file.
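
In C, size-based rotation can be done directly in the server's logging
code. A minimal sketch (the names log_write and rotate, the file name
server.log, and the 10 MiB limit are all illustrative, not from any
particular library) might look like this:

```c
#include <stdio.h>
#include <string.h>

#define LOG_NAME  "server.log"
#define LOG_LIMIT (10L * 1024 * 1024)   /* rotate at roughly 10 MiB */

static FILE *log_fp;
static long  log_bytes;

/* Close the current log, keep one previous generation, start fresh. */
static void rotate(void)
{
    if (log_fp)
        fclose(log_fp);
    rename(LOG_NAME, LOG_NAME ".old");  /* ignore failure on first run */
    log_fp = fopen(LOG_NAME, "w");
    log_bytes = 0;
}

/* Append one message, rotating first if it would exceed the limit. */
void log_write(const char *msg)
{
    long len = (long)strlen(msg);

    if (!log_fp || log_bytes + len > LOG_LIMIT)
        rotate();
    if (log_fp) {
        fputs(msg, log_fp);
        fflush(log_fp);
        log_bytes += len;
    }
}
```

Replacing bare printf calls with a wrapper like this also makes it easy
to add timestamps or log levels later.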
shakahshakah
9/25/2008 12:53:13 PM
shakahshakah@gmail.com <shakahshakah@gmail.com> wrote:
> Not answering your original question unfortunately, but in your
> scenario I'd think it likely that you'd fill up a filesystem before
> crossing a filesize boundary. In any case I think it's pretty standard
> practice to log to multiple files (e.g. switching to a new file when a
> certain size is reached, or by day) rather than a single big (and
> never-ending) file.

There is a tool called logrotate for this purpose.
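
logrotate is driven by a small per-log config file. A minimal example
(the path and sizes here are illustrative) might be:

```
/var/log/myserver.log {
    size 10M
    rotate 5
    compress
    missingok
    notifempty
}
```

This rotates the file once it passes 10 MB, keeps five compressed
generations, and silently skips missing or empty logs.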

Mark.

-- 
Mark Hobley,
393 Quinton Road West,
Quinton, BIRMINGHAM.
B32 1QE.
markhobley
9/25/2008 4:16:54 PM
In article <bbf23cbb-5a5f-47a1-98aa-c67ac25202f0@k7g2000hsd.googlegroups.com>,
Ravikanth  <rkanth.vvn@gmail.com> wrote:
>Hi all,
>
>I am invoking a 'C' program from a UNIX shell script and redirecting
>the output and error to a file.
>In this context I have a doubt regarding the file size.
>
>When I am redirecting the output to a file: since all files in
>UNIX have a specified upper limit on file size, what happens if
>we are still trying to write output to the file after that upper
>limit has been crossed?
>In this case, will the process that is writing logs to the file get
>killed or not?
>If it is killed, how do I handle this situation in the UNIX script?

The process will get the signal SIGXFSZ, which will kill it by default.
You can trap/ignore it using a signal handler in your C program, or not, in
which case you can use the exit status to determine that it was SIGXFSZ that
killed the process and deal with it however you want.
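
A minimal C sketch of the "ignore it" option (the file name demo.log and
the 1 KiB limit are chosen purely for demonstration): with SIGXFSZ
ignored, a write() past RLIMIT_FSIZE fails with errno set to EFBIG
instead of killing the process.

```c
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

/* Returns 0 if the process survived an oversized write, nonzero otherwise. */
int demo(void)
{
    /* Ignore SIGXFSZ so an oversized write fails instead of killing us. */
    signal(SIGXFSZ, SIG_IGN);

    /* Impose a small file-size limit (1 KiB) for demonstration. */
    struct rlimit rl = { .rlim_cur = 1024, .rlim_max = 1024 };
    if (setrlimit(RLIMIT_FSIZE, &rl) != 0)
        return 1;

    int fd = open("demo.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    char buf[2048];
    memset(buf, 'x', sizeof buf);

    /* The first write is truncated at the limit; the next write at the
       limit fails with EFBIG -- but the process keeps running. */
    ssize_t n = write(fd, buf, sizeof buf);
    ssize_t m = write(fd, buf, sizeof buf);

    int survived = (n > 0 && m == -1 && errno == EFBIG);
    close(fd);
    return survived ? 0 : 1;
}
```

From the invoking shell script, the other option is to check the exit
status of the C program: a child killed by a signal conventionally exits
with status 128 plus the signal number (the number for SIGXFSZ varies by
platform).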

Though it doesn't bear on your case, I tried trapping SIGXFSZ in shells.  I
find that this (mostly) works in ksh and bash, while the (old) zsh I tested got
stuck in a complaint-loop.  With a trap in place, in ksh a print/echo that
would exceed the file size limit returns success status even though it failed;
bash echo gives failure status.  Of course, the trap can do whatever it wants
to record the error.

	John
-- 
John DuBois  spcecdt@armory.com  KC6QKZ/AE  http://www.armory.com/~spcecdt/
spcecdt
9/25/2008 7:54:32 PM