Forbid concurrent runs of a process
Suppose I have some hypothetical CLI program. It is important that only one instance of this program runs at any given time. When multiple instances run concurrently, bad things happen.
Normally, the developer of the program would add some mechanism to prevent this, but let's say this one didn't, and I'm not able to add such a feature myself. Unless I, the user, take some precaution, the program remains vulnerable to unexpected concurrent runs.
Is there a general mechanism in Linux for saying "don't let more than one instance of this program run from now on"?
I expect that there are actually many ways to do it. I suggest posting one method per answer and letting the voters decide how to sort them.
2 answers
This can be done using the `flock` utility. The most useful mode for preventing multiple invocations of the same program is likely to be `-en` (exclusive lock, no wait). You need a file or directory (yes, a directory works!) on which to lock, shared across the instances that must not run simultaneously.
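In general, the wrapper invocation looks like this (a minimal sketch; the lock-file path and program name are placeholders):

```sh
# Acquire an exclusive lock without waiting; if another instance
# already holds it, exit immediately instead of running the program.
flock -en /path/to/lockfile your-program --its --usual --arguments
```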
Borrowing my example from an earlier answer: since `fetchmail` doesn't cope well with polling the same account from different processes at the same time, you can lock on its configuration file by doing something similar to:
```sh
fetchmailrc=~/.fetchmailrc.d/some-account.conf
flock -en "$fetchmailrc" fetchmail -f "$fetchmailrc"
```
If this is run twice in parallel, the first invocation will obtain a lock on the configuration file (named as an argument to `flock`) and start `fetchmail` (which in the example above is also passed the name of the configuration file, but that need not be the case). In the second invocation, `flock` will fail to get an exclusive lock on the file, because an exclusive lock is already held, and as a consequence will exit very quickly without starting a `fetchmail` process. In the above example, this ensures that only one `fetchmail` process is running against a given configuration file at any one time, provided all `fetchmail` invocations go through such a wrapper.
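To see this behaviour in isolation, you can try something along these lines (a sketch; `sleep` stands in for the real program and `/tmp/demo.lock` is an arbitrary lock file):

```sh
# First invocation grabs the lock and holds it while it runs.
flock -en /tmp/demo.lock sleep 10 &

sleep 1  # give the background job time to acquire the lock

# Second invocation fails to get the lock and exits immediately.
flock -en /tmp/demo.lock sleep 10
echo "second invocation exit status: $?"   # prints 1
```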
One big advantage over many other alternatives is that lock acquisition is atomic: `flock` will either get the lock and know that it acquired it, or fail to get the lock and know that it failed. It's therefore extremely unlikely that any race condition or time-of-check-to-time-of-use issue can arise in which a process fails to get the lock without detecting that failure.
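Because failure is reported through `flock`'s exit status, a script can react to it explicitly. A small sketch, using the `-E` option documented in flock(1) to give lock conflicts a distinct exit code (`some-command` is a placeholder):

```sh
# Run a command under the lock; -E makes a lock conflict report
# exit code 99 instead of the default 1, so it cannot be confused
# with the wrapped command's own exit status.
flock -en -E 99 /tmp/demo.lock some-command
status=$?
if [ "$status" -eq 99 ]; then
    echo "another instance is already running" >&2
else
    echo "some-command exited with status $status"
fi
```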
The flock(1) man page discusses the other available options, including non-exclusive (shared) locks and timed waits for lock acquisition.
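For instance, sketches of a timed wait and a shared lock (`some-writer` and `some-reader` are placeholders):

```sh
# Wait up to 10 seconds for the exclusive lock before giving up.
flock -x -w 10 /tmp/demo.lock some-writer

# Shared lock: several readers may hold it simultaneously,
# but it still conflicts with an exclusive lock.
flock -s /tmp/demo.lock some-reader
```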
Since this uses file locks effectively as semaphores, no explicit cleanup is required. In the example above, when `fetchmail` exits for any reason, `flock` will release the lock; if `flock` dies (taking `fetchmail` with it), the operating system will release the lock as part of process termination cleanup; and a hard system shutdown or reboot will clear all file locks. This is in contrast to making a lock file yourself, which will somehow need to be cleaned up if the process exits unexpectedly or uncleanly, including detecting a stale left-over lock file, which can be non-trivial, especially in the general case. The downside is that the file on which the lock is held must be on a file system which supports file locking; this can be a problem in an NFS or CIFS environment, for example.
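If you control a wrapper script, `flock` can also lock a file descriptor opened by the script itself, along the lines of the boilerplate shown in flock(1) (the lock path, descriptor number, and program name here are arbitrary):

```sh
#!/bin/sh
# Run the body of the script under an exclusive lock held on
# file descriptor 9, which is opened on the lock file below.
(
    flock -en 9 || { echo "already running" >&2; exit 1; }

    # ... actual work, protected by the lock ...
    your-program --its --usual --arguments

) 9>/var/lock/your-program.lock
```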
Make a lock file (or set an environment variable) and alias the CLI command to a wrapper that tests for the lock before it starts the real command; see the sketch below.
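A minimal sketch of the lock-file variant (all names are placeholders; `mkdir` is used because directory creation is atomic, which avoids the check-then-create race; a trap removes the lock on normal exit, though a stale lock can survive a crash, which is the cleanup problem mentioned in the other answer):

```sh
#!/bin/sh
# Hypothetical wrapper script; alias the real command to it, e.g.
#   alias your-program='/path/to/this-wrapper'
lockdir=/tmp/your-program.lock

# mkdir fails if the directory already exists, so creating it
# doubles as an atomic test-and-set of the lock.
if ! mkdir "$lockdir" 2>/dev/null; then
    echo "another instance appears to be running" >&2
    exit 1
fi

# Remove the lock when this wrapper exits (not after a hard crash).
trap 'rmdir "$lockdir"' EXIT

/usr/bin/your-program "$@"
```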