Q&A: Forbid concurrent runs of a process

Post History

posted 4mo ago by Canina‭

Answer
#1: Initial revision by Canina‭ · 2024-07-24T09:37:52Z (4 months ago)
This can be done using the `flock` utility.

The most useful mode for preventing multiple invocations of the same process is likely to be `-en` (**e**xclusive lock, **n**o wait). You need a file or directory (yes, a directory works!) to lock on, shared across all instances that must not run simultaneously.

Borrowing my example from [an earlier answer](https://linux.codidact.com/posts/292086/292091#answer-292091), since `fetchmail` doesn't cope well with polling the same account from multiple processes at the same time, you can lock on its configuration file by doing something similar to:

    # lock on the fetchmail configuration file for this account
    fetchmailrc=~/.fetchmailrc.d/some-account.conf
    flock -en "$fetchmailrc" fetchmail -f "$fetchmailrc"

If this is run twice in parallel, the first invocation obtains a lock on the configuration file (named as the argument to `flock`) and starts `fetchmail` (which, in the example above, is also passed the name of the configuration file, but that need not be the case). In the second invocation, `flock` fails to get an exclusive lock on the file because one is already held, and as a consequence exits very quickly without starting the `fetchmail` process. In the above example, this ensures that only one `fetchmail` process using the same configuration file is running at any one time, ***iff*** all `fetchmail` invocations go through such a wrapper.
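
A quick way to see this behaviour in a terminal (the lock path and the `sleep` stand-in are just for this sketch):

    flock -en /tmp/flock-demo.lock sleep 30 &     # background invocation: takes the lock and holds it for 30 seconds
    sleep 1                                       # give the background job a moment to acquire the lock
    flock -en /tmp/flock-demo.lock echo "got it"  # fails immediately with exit status 1; "got it" is never printed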

One big advantage over many other alternatives is that lock acquisition is atomic: `flock` either gets the lock and knows that it acquired it, or fails to get the lock and knows that it failed. It's therefore extremely unlikely that a race condition or time-of-check-to-time-of-use issue can arise in which a process fails to get the lock without detecting that failure.
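
Because that failure is reported through the exit status, the wrapper can react to it explicitly. As a sketch (assuming a reasonably recent util-linux `flock`, which supports `-E`; the exit code 75 is an arbitrary choice):

    flock -E 75 -en "$fetchmailrc" fetchmail -f "$fetchmailrc"
    if [ $? -eq 75 ]; then
        echo "another instance already holds the lock; skipping this run" >&2
    fi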

The `flock(1)` man page has further discussion of the available options, including non-exclusive locks and timed waits for lock acquisition.
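
For example, instead of failing immediately, a wrapper could wait a bounded time for the lock, or take a shared lock for read-only work (the command name here is a placeholder):

    flock -e -w 10 "$fetchmailrc" fetchmail -f "$fetchmailrc"   # wait up to 10 seconds for the exclusive lock
    flock -s "$fetchmailrc" some-read-only-report               # shared lock: several readers may hold it at once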

Since this effectively uses file locks as semaphores, no explicit cleanup is required. In the example above, when `fetchmail` exits for any reason, `flock` releases the lock; if `flock` dies (taking `fetchmail` with it), the operating system releases the lock as part of process termination cleanup; and a hard system shutdown or reboot clears all file locks. This is in contrast to creating a lock file yourself, which would somehow need to be cleaned up if the process exits unexpectedly or uncleanly, including detecting a stale left-over lock file, which can be non-trivial, especially in the general case.

The downside is that the file on which the lock is held must be on a file system that supports file locking; this can be a problem if you are in an NFS or CIFS environment, for example.
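
If you would rather have a script protect itself instead of relying on an external wrapper, the `flock(1)` man page also documents a boilerplate idiom using a numbered file descriptor; roughly (the descriptor number and lock path are arbitrary):

    #!/bin/sh
    (
        flock -en 9 || exit 1
        # ... the part of the script that must not run concurrently ...
    ) 9>/var/lock/myscript.lock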