
I wrote a script that runs specific actions in response to input events reported by an event monitor. It looks something like this:
$ cat script.sh
-----------------------------------------------------------
#!/usr/bin/bash
stdbuf -oL /usr/bin/event_monitor | while IFS= read LINE
do
    something with "$LINE"
done
When run as a script from a bash terminal, it consumes a normal amount of CPU and executes the action only when a new line is printed. However, when run as a service with the following setup:
$ cat event.service
-----------------------------------------------------------
[Unit]
Description=Actions upon events
[Service]
Type=simple
ExecStart=/path/to/script.sh
[Install]
WantedBy=default.target
the event_monitor command now takes over an entire logical core, and strace reveals that it is read()ing nothing as often as the processor allows:
$ strace -p $event_monitor_pid
-----------------------------------------------------------
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
read(0, "", 1) = 0
................ ad nauseam
Meanwhile, the service still registers events and executes the conditional commands when real events do occur. What went wrong here?
P.S. This happens with cras_monitor but not with acpi_listen. I tried to make sure the while loop only starts after the underlying service has successfully started, but to no avail.
Update: here are some potentially relevant bits of event_monitor's code:
...
#include <headers.h>
...
/* Print-to-console callback: */
static void event_occurrence(void *context, int32_t attribute)
{
    printf("Some attribute has changed to %d.\n", attribute);
}
...
int main(int argc, char **argv)
{
    struct some_service_client *client;  /* defined in headers */
    int rc;
    ...
    /* Some routine */
    ...
    some_service_client_set_event_occurrence_callback(client, event_occurrence);
    ...
    rc = some_func(client);
    ...
    while (1) {
        int rc;
        char c;
        rc = read(STDIN_FILENO, &c, 1);
        if (rc < 0 || c == 'q')
            return 0;
    }
    ...
}
Answer 1
It is your event_monitor program that is looping, using up all the CPU, not your bash script.
When run under systemd, stdin has /dev/null attached (or perhaps it is even closed). When the event-monitor loop in main does a read(2), it gets EOF and goes around the loop again.
When run interactively, event_monitor has the terminal attached to stdin, so the read(2) blocks until there is input.
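
You can reproduce this in a terminal by giving event_monitor the same kind of stdin it gets from systemd, i.e. redirecting it from /dev/null:

$ /usr/bin/event_monitor < /dev/null

With stdin at immediate EOF, every read(2) returns 0 and the loop spins, which should show the same 100% CPU usage you see under the service.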
event_monitor should keep looping on stdin reads only while stdin is open. If it receives EOF, it should either exit (probably not desirable in this case) or just sleep for a long time.
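
A minimal sketch of the "sleep" variant, based on the loop in the excerpt above (the some_service_client setup is omitted; pause() is just one way to sleep until something interesting happens):

#include <unistd.h>

int main(void)
{
    while (1) {
        char c;
        ssize_t rc = read(STDIN_FILENO, &c, 1);

        if (rc < 0 || (rc > 0 && c == 'q'))
            return 0;   /* read error, or an explicit 'q' to quit */
        if (rc == 0)
            pause();    /* EOF (e.g. /dev/null under systemd): block for signals instead of spinning */
    }
}

The only behavioural change is the rc == 0 branch: instead of immediately read()ing EOF again, the process blocks, so the event callbacks keep working without burning a core.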
If you are unable to change event_monitor, you may have some success attaching a FIFO (named pipe) to the service's stdin. systemd has the StandardInput option (documented in the systemd.exec(5) man page), where you could specify StandardInput=file:/run/event_monitor_ctl. Then you just need to create the /run/event_monitor_ctl named pipe, which you can do with systemd-tmpfiles by creating a config file for it (see tmpfiles.d(5)).
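
A rough, untested sketch of what that could look like (the tmpfiles file name is arbitrary, and the mode/ownership may need adjusting for your setup):

# /etc/tmpfiles.d/event_monitor.conf
# "p" creates a named pipe (FIFO) if it does not already exist; see tmpfiles.d(5)
p /run/event_monitor_ctl 0600 root root

and in the [Service] section of event.service:

StandardInput=file:/run/event_monitor_ctl

The idea is that a read from the FIFO waits for input rather than immediately returning EOF the way a read from /dev/null does.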