daveespo opened this issue 12 years ago

I was a little surprised to see that the default for poll_retries is 16, which means it generally takes over a second to give up on creating a Lockfile if one already exists. Why isn't it zero? (Or, more precisely, why is this polling step even necessary? Isn't that the purpose of the :retries argument?)
the strategy lockfile uses is not arbitrary; it was tuned over years of running on various shared file systems in production environments. in summary: shared file systems lie, cache inodes, and generally cannot be trusted. the strategy is a punctuated sawtooth pattern: 'try hard in rapid succession', back off incrementally (become more patient), but eventually become impatient again...
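for illustration, the shape of that strategy in ruby -- this is not lockfile's actual source; the helper and the numbers here are made up to show the pattern:

```ruby
# illustrative only -- not lockfile's internals
def try_to_create_lockfile(path = 'file.lock')
  # O_EXCL creation fails if the file exists -- the classic lockfile primitive
  File.open(path, File::WRONLY | File::CREAT | File::EXCL) { true }
rescue Errno::EEXIST
  false
end

poll_retries = 16   # rapid tries within a single attempt
sleep_inc    = 2    # patience grows by this much per attempt
max_sleep    = 32   # ...until it hits this cap
seconds      = 2    # starting patience

acquired = false
until acquired
  poll_retries.times do              # 'try hard in rapid succession'
    acquired = try_to_create_lockfile
    break if acquired
    sleep(rand * 0.08)               # brief, jittered poll
  end
  break if acquired
  sleep(seconds)                     # back off incrementally...
  seconds += sleep_inc
  seconds = 2 if seconds > max_sleep # ...but eventually become impatient again
end
```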
it's very important to have multiple tries and a random access pattern on large distributed file systems, which are the only reason you'd use lockfile in the first place. otherwise you'd just use DATA.flock(File::LOCK_EX | File::LOCK_NB)
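e.g. on a truly local filesystem a bare flock is all you need -- a sketch, the path and structure here are illustrative:

```ruby
# plain flock on a local filesystem
File.open('/tmp/my.lock', File::RDWR | File::CREAT, 0644) do |f|
  if f.flock(File::LOCK_EX | File::LOCK_NB)
    # got the lock; do the critical work here
  else
    # lock is held by another process; give up (or retry)
  end
end
```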
Jeez ... my bad ... We loved the interface (cleaning up stale lockfiles, etc.), which is why we use Lockfile on the local filesystem.
In the short term (until we have time to port over to flock instead), would you agree that it's a safe approximation to set poll_retries to zero?
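Something like this is what I have in mind -- a sketch, assuming I'm reading the gem's option and block interfaces right:

```ruby
require 'lockfile'

# skip the rapid-poll phase entirely on a local filesystem;
# the path here is just an example
lockfile = Lockfile.new('/tmp/myapp.lock', :poll_retries => 0)

lockfile.lock do
  # critical section -- the lock is released when the block returns
end
```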
yes. i'd accept a policy pull request with a local vs shared settings approach. local could be the default....
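something in this spirit, maybe -- entirely hypothetical, the policy table and helper are placeholders, not anything in the gem today:

```ruby
require 'lockfile'

# hypothetical local-vs-shared policy presets
POLICIES = {
  :local  => { :poll_retries => 0 },
  :shared => { :poll_retries => 16 }   # roughly the current behavior
}

def lockfile_for(path, policy = :local, opts = {})
  Lockfile.new(path, POLICIES.fetch(policy).merge(opts))
end

lockfile_for('/tmp/myapp.lock').lock do
  # critical section
end
```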