


How to Decrease Priority of a Process Beyond What "nice" Will Do (Don't Want CPU Fan to Run)?

I have a Lenovo 11e (an elementary school computer) that I use as a Linux server.  Works great and only draws 6 watts, or < $10/year in power consumption.

The CPU is a very low-end quad-core.

I have some maintenance crons that run for about 90 minutes solid once a week.  The machine is normally fanless (web requests won't normally make the fan run), except when these crons run the fan kicks in to medium speed.  The load number as displayed in "top" is usually between 1.0 and 1.4 while the cron is running.

The cron in general is tar'ing up lots of files and calculating sha512 checksums on around 50,000 files.

I've tried "nice" on these crons but it doesn't have much effect because they aren't competing with anything else.  The machine is idle.

Is there any way to lower the priority of a process so that even when the CPUs are idle, it won't consume a full core?

If I wrote a C program (for example), I'd know how to do it -- just use sleep() and microsleep().  But for an existing program written by others, I'm not sure if you can do this.

My goal is not to have the fan run at all.
David
10/1/2016 4:51:21 AM

On Friday, September 30, 2016 at 9:51:25 PM UTC-7, David T. Ashley wrote:
....
> I have some maintenance crons that run for about 90 minutes solid once a week.
> The machine is normally fanless (web requests won't normally make the fan run),
> except when these crons run the fan kicks in to medium speed.  The load number
> as displayed in "top" is usually between 1.0 and 1.4 while the cron is running.
>
> The cron in general is tar'ing up lots of files and calculating sha512
> checksums on around 50,000 files.
....
> Is there any way to lower the priority of a process so that even when the
> CPUs are idle, it won't consume a full core?
....
> My goal is not to have the fan run at all.

If that is indeed your goal, then by far the simplest and most complete solution is to under-clock the CPU(s), if that is supported by the hardware/BIOS.

Next best is to figure out how to have the OS under-clock the CPU(s), perhaps via boot-time arguments or sysctl (or /proc/*) settings.  (You don't mention OS, so no specific advice can be given.)
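
(The first post does mention Linux, and there the usual knob for this is the
cpufreq sysfs interface.  A rough sketch of poking it directly, assuming a
cpufreq driver is loaded; the path pattern and the 800000 kHz figure are only
examples, so read scaling_available_frequencies on the actual machine first:)

/*
 * cap_freq.c -- illustrative only: clamp the maximum clock of every core
 * through the Linux cpufreq sysfs interface.  Run as root before the heavy
 * cron job, and run again with the original value to restore it afterwards.
 */
#include <stdio.h>

int main(void)
{
    char path[128];
    int cpu, done = 0;

    for (cpu = 0; cpu < 64; cpu++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
        FILE *f = fopen(path, "w");
        if (!f)
            break;                  /* no such CPU, or no cpufreq driver */
        fprintf(f, "800000\n");     /* value is in kHz */
        fclose(f);
        done++;
    }
    if (!done) {
        fprintf(stderr, "no cpufreq interface found\n");
        return 1;
    }
    return 0;
}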

If neither of those options is available or, whoops, you actually *did* want it to run the fan when a non-cron-based process was heavy enough to require it, then perhaps a small wrapper program that implemented a duty cycle on a child by sending SIGSTOP and SIGCONT would serve your purpose.  If it used process groups it would even work for (most) shell scripts or other multi-process operations.
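
(A minimal sketch of that kind of wrapper, assuming Linux/POSIX; the name
"dutycycle" and the one-second-on / nine-seconds-off split are made up for
illustration, and error handling is deliberately thin:)

/*
 * dutycycle.c -- run COMMAND in its own process group and stop/continue
 * the whole group on a fixed duty cycle (here ~1 s running, ~9 s stopped),
 * so tar/sha512sum pipelines get throttled as a unit.
 */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 2;
    }

    pid_t child = fork();
    if (child == -1) {
        perror("fork");
        return 1;
    }
    if (child == 0) {
        setpgid(0, 0);              /* new process group: scripts/pipelines included */
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }
    setpgid(child, child);          /* also done in the parent, to avoid a startup race */

    for (;;) {
        int status;
        if (waitpid(child, &status, WNOHANG) == child &&
            (WIFEXITED(status) || WIFSIGNALED(status)))
            break;                  /* the command finished */

        sleep(1);                   /* let the group run for about a second */
        kill(-child, SIGSTOP);      /* then stop everything in the group */
        sleep(9);                   /* ~10% duty cycle overall */
        kill(-child, SIGCONT);
    }
    return 0;
}

(The cpulimit tool mentioned further down the thread automates essentially
the same trick.)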


Philip Guenther
Philip
10/1/2016 8:47:57 AM
On Friday, 30 September 2016 21:51:25 UTC-7, David T. Ashley  wrote:
> I have a Lenovo 11e (an elementary school computer) that I use as a Linux server.  Works great and only draws 6 watts, or < $10/year in power consumption.
>
> The CPU is a very low-end quad-core.
>
> I have some maintenance crons that run for about 90 minutes solid once a week.  The machine is normally fanless (web requests won't normally make the fan run), except when these crons run the fan kicks in to medium speed.  The load number as displayed in "top" is usually between 1.0 and 1.4 while the cron is running.
>
> The cron in general is tar'ing up lots of files and calculating sha512 checksums on around 50,000 files.
>
> I've tried "nice" on these crons but it doesn't have much effect because they aren't competing with anything else.  The machine is idle.
>
> Is there any way to lower the priority of a process so that even when the CPUs are idle, it won't consume a full core?
>
> If I wrote a C program (for example), I'd know how to do it -- just use sleep() and microsleep().  But for an existing program written by others, I'm not sure if you can do this.
>
> My goal is not to have the fan run at all.

That is an awesome question - one we really need to ask more often.  It is such a small amount of energy you are looking at saving that we tend to ignore these issues.  Now if we were designing a system going to Mars, where every rise in temperature needs to be accounted for, it would be a different story.

The 'nice' utility modifies the priority of a process, not the overall scheduling coverage.

A tweak of the scheduler, looking at the different classes of process (i.e. real-time, batch), would be helpful.

Maybe we need a new utility called 'green' for running batch-style, conserved and controllable CPU cycle/minute processes.  This would be helpful for admin'ing temperature-sensitive, low-duty fanless embedded systems.

There you go: repost the question in a *nix embedded group; maybe one of the hardware guys has come up with a utility.

Keep on pushing.
Todd
10/1/2016 4:32:16 PM
On 2016-10-01, David T. Ashley <dashley@gmail.com> wrote:
> My goal is not to have the fan run at all.

Off-the-cuff idea 1: You can run the task (ironically) in the real-time domain but then
use the appropriate sysctl parameters to throttle the real-time
domain scheduling to only have a small percentage of the CPU.

Shooting-from-the-hip idea 2: you can have a process which monitors the
job and stops/starts it by delivering the SIGSTOP and SIGCONT signals in a
loop. Effectively, you inject random sleeps into the task externally.
SIGCONT it for a second then SIGSTOP it for nine: max ~ 10% usage.
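
(A minimal sketch of idea 1, assuming Linux: the wrapper below only moves the
command into the real-time class; the actual cap comes from lowering the
global throttle, e.g. "sysctl kernel.sched_rt_runtime_us=100000" against the
default kernel.sched_rt_period_us of 1000000 for roughly 10%.  It needs root,
and the throttle applies to every real-time task on the box, so treat it as
an illustration rather than a recommendation:)

/* rtwrap.c -- put COMMAND into SCHED_FIFO so the kernel's RT-bandwidth
 * throttle (sched_rt_runtime_us / sched_rt_period_us) limits how much CPU
 * it can get, even on an otherwise idle machine. */
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    struct sched_param sp = { .sched_priority = 1 };   /* lowest RT priority */

    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 2;
    }
    /* Needs root or CAP_SYS_NICE. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}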
Kaz
10/1/2016 4:37:56 PM
On 01/10/2016 06:51, David T. Ashley wrote:

> I have a Lenovo 11e (an elementary school computer) that I use as a
> Linux server.  Works great and only draws 6 watts, or < $10/year in
> power consumption.
> 
> The CPU is a very low-end quad-core.
> 
> I have some maintenance crons that run for about 90 minutes solid
> once a week.  The machine is normally fanless (web requests won't
> normally make the fan run), except when these crons run the fan kicks
> in to medium speed.  The load number as displayed in "top" is usually
> between 1.0 and 1.4 while the cron is running.
> 
> The cron in general is tar'ing up lots of files and calculating
> sha512 checksums on around 50,000 files.
> 
> I've tried "nice" on these crons but it doesn't have much effect
> because they aren't competing with anything else.  The machine is
> idle.
> 
> Is there any way to lower the priority of a process so that even when
> the CPUs are idle, it won't consume a full core?
> 
> If I wrote a C program (for example), I'd know how to do it -- just
> use sleep() and microsleep().  But for an existing program written by
> others, I'm not sure if you can do this.
> 
> My goal is not to have the fan run at all.

I think you'll want to take a look at this page:

http://blog.scoutapp.com/articles/2014/11/04/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups

It would appear cpulimit will fit your needs.

https://github.com/opsengine/cpulimit
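
(For reference, cpulimit works by the same SIGSTOP/SIGCONT duty-cycle trick
discussed elsewhere in the thread; an invocation along the lines of
"cpulimit -l 25 <command>" should hold the job to roughly a quarter of one
core.)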

Regards.

Noob
10/1/2016 4:52:59 PM
On 01/10/2016 18:37, Kaz Kylheku wrote:

> Shooting-from-the-hip idea 2: you can have a process which monitors the
> job and stops/starts it by delivering the SIGSTOP and SIGCONT signals in a
> loop. Effectively, you inject random sleeps into the task externally.
> SIGCONT it for a second then SIGSTOP it for nine: max ~ 10% usage.

https://github.com/opsengine/cpulimit  ;-)

Regards.

Noob
10/1/2016 4:54:23 PM
On Sat, 2016-10-01, David T. Ashley wrote:
> I have a Lenovo 11e (an elementary school computer) that I use as a
> Linux server.  Works great and only draws 6 watts, or < $10/year in
> power consumption.
>
> The CPU is a very low-end quad-core.
>
> I have some maintenance crons that run for about 90 minutes solid
> once a week.  The machine is normally fanless (web requests won't
> normally make the fan run), except when these crons run the fan
> kicks in to medium speed.  The load number as displayed in "top" is
> usually between 1.0 and 1.4 while the cron is running.
>
> The cron in general is tar'ing up lots of files and calculating
> sha512 checksums on around 50,000 files.

A bit odd that that would show a high load: I'd expect that task to be
I/O bound, and waiting for disk read/write most of the time.  But my
disks are slow ...

> I've tried "nice" on these crons but it doesn't have much effect
> because they aren't competing with anything else.  The machine is
> idle.
>
> Is there any way to lower the priority of a process so that even
> when the CPUs are idle, it won't consume a full core?
>
> If I wrote a C program (for example), I'd know how to do it -- just
> use sleep() and microsleep().  But for an existing program written
> by others, I'm not sure if you can do this.
>
> My goal is not to have the fan run at all.

There's also ionice(1).  Not sure it's of any use to you: you can say
"this process can only get disk access if no one else needs it".

But it still gets all the I/O of an otherwise idle system.
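
(The idle class is selected with something like "ionice -c 3" in front of
the command; note it only has an effect under an I/O scheduler that honours
I/O priorities, such as CFQ.)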

/Jorgen

-- 
  // Jorgen Grahn <grahn@  Oo  o.   .     .
\X/     snipabacken.se>   O  o   .
Jorgen
10/1/2016 5:11:47 PM
Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
> On Sat, 2016-10-01, David T. Ashley wrote:
>> I have a Lenovo 11e (an elementary school computer) that I use as a
>> Linux server.  Works great and only draws 6 watts, or < $10/year in
>> power consumption.
>>
>> The CPU is a very low-end quad-core.
>>
>> I have some maintenance crons that run for about 90 minutes solid
>> once a week.  The machine is normally fanless (web requests won't
>> normally make the fan run), except when these crons run the fan
>> kicks in to medium speed.  The load number as displayed in "top" is
>> usually between 1.0 and 1.4 while the cron is running.
>>
>> The cron in general is tar'ing up lots of files and calculating
>> sha512 checksums on around 50,000 files.
>
> A bit odd that that would show a high load: I'd expect that task to be
> I/O bound, and waiting for disk read/write most of the time.  But my
> disks are slow ...

Time spent waiting for disk I/O is counted towards the load average.

-- 
http://www.greenend.org.uk/rjk/
Richard
10/1/2016 6:01:36 PM
On Sat, 2016-10-01, Richard Kettlewell wrote:
> Jorgen Grahn <grahn+nntp@snipabacken.se> writes:
>> On Sat, 2016-10-01, David T. Ashley wrote:
>>> I have a Lenovo 11e (an elementary school computer) that I use as a
>>> Linux server.  Works great and only draws 6 watts, or < $10/year in
>>> power consumption.
>>>
>>> The CPU is a very low-end quad-core.
>>>
>>> I have some maintenance crons that run for about 90 minutes solid
>>> once a week.  The machine is normally fanless (web requests won't
>>> normally make the fan run), except when these crons run the fan
>>> kicks in to medium speed.  The load number as displayed in "top" is
>>> usually between 1.0 and 1.4 while the cron is running.
>>>
>>> The cron in general is tar'ing up lots of files and calculating
>>> sha512 checksums on around 50,000 files.
>>
>> A bit odd that that would show a high load: I'd expect that task to be
>> I/O bound, and waiting for disk read/write most of the time.  But my
>> disks are slow ...
>
> Time spent waiting for disk I/O is counted towards the load average.

Thanks!  You're right, of course, and the Linux uptime(1) man page
confirms it.

Now I need to adjust how I interpret the xload(1) graph.

/Jorgen

-- 
  // Jorgen Grahn <grahn@  Oo  o.   .     .
\X/     snipabacken.se>   O  o   .
Jorgen
10/2/2016 9:42:49 AM
Todd Saharchuk <todd.saharchuk@gmail.com> writes:
> Maybe we need a new utility called 'green' for running batch style,
> conserved and controllable CPU cycle/minute processes.  This would be
> helpful for admin'ing temperature sensitive, low-duty fanless embedded
> systems.

I’m not convinced this would actually save energy.  Received wisdom
(e.g. from people who’ve done Linux integration for laptops, where power
consumption is critical) is that the lowest energy cost is achieved by
completing the desired action as rapidly as possible and returning to a
low-power state.

-- 
http://www.greenend.org.uk/rjk/
Richard
10/2/2016 9:49:27 AM
On Sat, 01 Oct 2016 19:01:36 +0100
Richard Kettlewell <invalid@invalid.invalid> wrote:

> > A bit odd that that would show a high load: I'd expect that task to
> > be I/O bound, and waiting for disk read/write most of the time.
> > But my disks are slow ...
> 
> Time spent waiting for disk I/O is counted towards the load average.

There is a saying in the US army: They also serve who stand and wait.  

Did you mean to say, "On Linux, ...", or has that been true of Unix
utilities since time immemorial?  

I don't understand the rationale.  Is the idea that while a process is
waiting for the disk, the kernel is working in its stead?  Or is it
just a bookkeeping simplification?  

--jkl
James
10/3/2016 7:44:45 PM
"James K. Lowden" , dans le message
<20161003154445.9d9dafcca5f5565bbf891d35@speakeasy.net>, a �crit�:
> I don't understand the rationale.  Is the idea that while a process is
> waiting for the disk, the kernel is working in its stead?  Or is it
> just a bookkeeping simplification?  

The load average is about "load", not "CPU". Summarizing such a complex
notion as a single number requires simplifications. Even just CPU, with SMP
and hyperthreading, can not be summarized as a single number.
Nicolas
10/3/2016 7:59:55 PM
On Mon, 2016-10-03, James K. Lowden wrote:
> On Sat, 01 Oct 2016 19:01:36 +0100
> Richard Kettlewell <invalid@invalid.invalid> wrote:
>
>> > A bit odd that that would show a high load: I'd expect that task to
>> > be I/O bound, and waiting for disk read/write most of the time.
>> > But my disks are slow ...
>> 
>> Time spent waiting for disk I/O is counted towards the load average.
>
> There is a saying in the US army: They also serve who stand and wait.  
>
> Did you mean to say, "On Linux, ...", or has that been true of Unix
> utilities since time immemorial?  
>
> I don't understand the rationale.  Is the idea that while a process is
> waiting for the disk, the kernel is working in its stead?  Or is it
> just a bookkeeping simplification?  

"A process wants to do work, but is prevented from doing it."
Counting those processes seems to be a useful thing to do.

Although it's funny that you cannot use the loadavg to predict if
executing a second I/O-intensive process is a bad idea, or not.
And networking I/O load is, IIUC, not part of the loadavg at all.

/Jorgen

-- 
  // Jorgen Grahn <grahn@  Oo  o.   .     .
\X/     snipabacken.se>   O  o   .
Jorgen
10/3/2016 10:10:19 PM
On 10/3/2016 3:44 PM, James K. Lowden wrote:
> On Sat, 01 Oct 2016 19:01:36 +0100
> Richard Kettlewell <invalid@invalid.invalid> wrote:
>
>>> A bit odd that that would show a high load: I'd expect that task to
>>> be I/O bound, and waiting for disk read/write most of the time.
>>> But my disks are slow ...
>>
>> Time spent waiting for disk I/O is counted towards the load average.
>
> There is a saying in the US army: They also serve who stand and wait.

     <topicality level="sketchy at best">

     The `saying' is from 1674, a little more than a century before the
establishment of the first forebears of the U.S. Army.

https://en.wikipedia.org/wiki/When_I_Consider_How_My_Light_is_Spent
https://en.wikipedia.org/wiki/United_States_Army#History

     An alternative version, from (IIRC) "The Rime of the Ancient Surfer"
in MAD Magazine, runs "They also surf who only stand on waves."

     </topicality>

-- 
esosman@comcast-dot-net.invalid
The Tooth Fairy has gone high-tech, now pays in Bitecoin.
Eric
10/3/2016 10:10:32 PM
"James K. Lowden" <jklowden@speakeasy.net> writes:
> Richard Kettlewell <invalid@invalid.invalid> wrote:
>>> A bit odd that that would show a high load: I'd expect that task to
>>> be I/O bound, and waiting for disk read/write most of the time.
>>> But my disks are slow ...
>> 
>> Time spent waiting for disk I/O is counted towards the load average.
>
> There is a saying in the US army: They also serve who stand and wait.  
>
> Did you mean to say, "On Linux, ...", or has that been true of Unix
> utilities since time immemorial?  

AFAIK it goes back at least to SunOS 4.

> I don't understand the rationale.  Is the idea that while a process is
> waiting for the disk, the kernel is working in its stead?  Or is it
> just a bookkeeping simplification?  

You’d have to ask whoever first designed it for the actual rationale...

My guess would be that the logic is that disk IO is as much a scarce
shared resource as CPU is, and heavy use of either leads to slow
completion of common tasks.

-- 
http://www.greenend.org.uk/rjk/
Richard
10/4/2016 9:18:39 AM
> I have a Lenovo 11e (an elementary school computer) that I use
> as a Linux server.  Works great and only draws 6 watts, or < $10/year
> in power consumption.

> The CPU is a very low-end quad-core.

A little Googling indicates that the processor is an Intel Celeron,
which has a Linux driver, p4-clockmod, that controls processor speed
by skipping clocks, with a suggestion that one of the speedstep-lib
drivers might be better.

> I have some maintenance crons that run for about 90 minutes solid
> once a week.  The machine is normally fanless (web requests won't
> normally make the fan run), except when these crons run the fan
> kicks in to medium speed.  The load number as displayed in "top"
> is usually between 1.0 and 1.4 while the cron is running.

> The cron in general is tar'ing up lots of files and calculating
> sha512 checksums on around 50,000 files.

> I've tried "nice" on these crons but it doesn't have much effect
> because they aren't competing with anything else.  The machine is
> idle.


I think "priority" is the wrong thing to try to adjust.  Human
managers always seem to have problems with the mistaken idea that
raising the priority of *EVERYTHING* gets work done faster.  It
doesn't work.  It especially doesn't work if there's only one thing
on the task list.

FreeBSD (and I believe Linux has something similar) has "real-time
priorities" and "idle priorities".  They have multiple levels.  A
process running at "idle priority" won't run unless there are no
non-idle-priority processes and no processes with higher idle
priority ready to run.  You can observe this by running an infinite
loop process at real-time priority, and observing that logging out
now takes forever.  You can also try running several infinite-loop
processes at equal idle priority and observe that they pretty much
divide the CPU time between themselves equally.  This won't work
for your purposes.

There are, however, other controls that may help.  FreeBSD has a daemon
called "powerd" which implements a "power profile".  Linux has
cpufreqd which is similar.  The idea here is that CPU speed on some
CPUs can be reduced below maximum (which requires a hardware
capability to do this), and that it may be desirable to reduce it
to conserve (battery) power, especially on phones and laptops.
(Note:  Linux also has a "powerd" daemon which monitors a UPS and
shuts down the system before the battery in the UPS runs out - that
is *NOT* what I am referring to, and I'm not sure that this comes
with a Linux distribution; it may be an add-on supplied by a UPS
vendor.)  For some implementations, you can have separate power
profiles, one when the system is running on mains power and one
when it is running from battery.  It might be useful for your
purposes to modify the daemon so it always thinks it is on battery
power even if it isn't.

Phones and laptops sometimes control other things besides CPU speed
in a power profile:  they may turn on and off the display, vary the
intensity of the display, turn on and off Bluetooth and/or WiFi,
power down USB devices, etc.

A system has a thermal time constant.  If you alternate between
full speed and idle on alternate nanoseconds, the fan will probably
stay at the same speed (off, on, or low speed if it has one) all
the time.  If you alternate between full speed and idle on alternate
weeks, the fan will likely switch between off and full speed.
Somewhere in between, with a cycle time of, say, a minute to an hour,
you can insert pauses and keep the fan off: for example, insert a
ten-second sleep between checksumming each file.  This assumes that there
aren't a few HUGE files that take long enough to checksum that the
fan comes on every time.

There are some load managers used on shared systems (say, a web
server hosting a few hundred virtual web sites) that try to divide
the CPU load equally between *SITES* (not processes).  I recall
hacking this into the FreeBSD scheduler, where a "site" was
distinguished by UID, for this purpose.  Now, if you could allocate
less than 100% total and have it run an idle process (which typically
runs a HALT instruction and waits for a time slice to expire or an
interrupt from a peripheral) the rest of the time, you might get
what you want.

> Is there any way to lower the priority of a process so that even
> when the CPUs are idle, it won't consume a full core?

Priority is the wrong term here.  Lower the CPU speed or insert
pauses.  Some systems can use adaptive CPU speed adjustment:  when
there is not much demand for the CPU, keep it slow, but if the
demand goes up, the speed can be automatically increased.
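
(On Linux that adaptive behaviour corresponds to the "ondemand" or
"conservative" cpufreq governors; "conservative" ramps the clock up more
gradually, which may suit a machine where fan noise is the concern.)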

> If I wrote a C program (for example), I'd know how to do it -- 
> just use sleep() and microsleep().  But for an existing program 
> written by others, I'm not sure if you can do this.

I have heard of people playing tricks with ptrace(), where it might
be possible to set a breakpoint, say, at every system call, and
pause before resuming the process.  Linux, for example, provides
ptrace(PTRACE_SYSCALL, pid, ...) so you don't need to know where
the system-call code in the program is located.  
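
(A minimal Linux-only sketch of that trick; the 2 ms pause is arbitrary and
signal forwarding is omitted for brevity.  Note it mainly slows work that
makes frequent system calls, which checksumming and tar'ing do:)

/* syscall_throttle.c -- run COMMAND under PTRACE_SYSCALL and pause briefly
 * at every system-call stop, slowing the program down from the outside
 * without modifying it. */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 2;
    }

    pid_t child = fork();
    if (child == -1) {
        perror("fork");
        return 1;
    }
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execvp(argv[1], &argv[1]);          /* stops with SIGTRAP after exec */
        perror("execvp");
        _exit(127);
    }

    const struct timespec delay = { 0, 2 * 1000 * 1000 };  /* 2 ms */
    int status;

    waitpid(child, &status, 0);             /* wait for the initial stop */
    while (!WIFEXITED(status) && !WIFSIGNALED(status)) {
        nanosleep(&delay, NULL);            /* child is stopped while we sleep */
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to the next syscall stop */
        waitpid(child, &status, 0);
    }
    return 0;
}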

> My goal is not to have the fan run at all.

gordonb
11/7/2016 1:39:42 AM
>> Maybe we need a new utility called 'green' for running batch style,
>> conserved and controllable CPU cycle/minute processes.  This would be
>> helpful for admin'ing temperature sensitive, low-duty fanless embedded
>> systems.
> 
> I’m not convinced this would actually save energy.  Received wisdom
> (e.g. from people who’ve done Linux integration for laptops, where power
> consumption is critical) is that the lowest energy cost is achieved by
> completing the desired action as rapidly as possible and returning to a
> low-power state.

Sometimes the goal is not minimum total energy, but minimum peak power
(and hence temperature).  This comes up when manufacturers cheap out on
the fans, heat sinks, etc. (and perhaps the room with the system in it
has poor air conditioning in the summer, and/or the case is loaded up
with 8 disk drives that use a lot of power), and it is observed that
using 4 cores at peak usage for more than 15 minutes causes a thermal
shutdown, but the machine can run all month with 3 cores going full speed.


Gordon
11/7/2016 1:48:53 AM