Closed: Phoenix500526 closed this PR 4 days ago
@Phoenix500526 Convert your PR to a draft since CI failed.
@Phoenix500526 You've modified the workflows. Please don't forget to update the .mergify.yml.
Attention: Patch coverage is 70.04831%, with 62 lines in your changes missing coverage. Please review. Project coverage is 75.57%. Comparing base (e35b35a) to head (09f1df9). Report is 132 commits behind head on master.
@Phoenix500526 Your PR is in conflict and cannot be merged.
A Mutex and its guard may not be the best abstraction. If an error occurs, such as a lease-renewal failure, the point where the lock is released is a good place to report it. In the mutex-guard abstraction, the lock release happens in the drop function, where the error cannot be handled by the caller.
But if you don't provide an RAII implementation for the lock, users may be bothered by the fact that they can forget to release a lock.
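To make the concern concrete, here is a minimal sketch of why a drop-based release loses errors; `RemoteLock`, `Guard`, and `release` are illustrative names, not the xline-client API:

```rust
// Hypothetical toy lock whose release can fail (e.g. a lease-renewal
// error detected when talking to the server).
struct RemoteLock;

struct Guard<'a> {
    lock: &'a RemoteLock,
}

impl RemoteLock {
    fn lock(&self) -> Guard<'_> {
        Guard { lock: self }
    }

    // Releasing may fail, e.g. the lease already expired on the server.
    fn release(&self) -> Result<(), String> {
        Err("lease renew failure".to_owned())
    }
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        // Drop returns (), so the caller never sees this error:
        // we can only log it or swallow it here.
        if let Err(e) = self.lock.release() {
            eprintln!("lock release failed: {e}");
        }
    }
}

fn main() {
    let lock = RemoteLock;
    {
        let _guard = lock.lock();
        // critical section
    } // the release error is swallowed inside Drop
    println!("caller continues unaware of the release failure");
}
```

The caller's control flow cannot branch on the release outcome, which is exactly the objection raised above.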
No, they won't, if you put the operations that deal with the locked data in a closure and release the lock when leaving the closure. For example, given the following code:
```rust
let return_value = a_xutex.map_lock(|xutex_guard| {
    // dealing with the guard
});
```
The `return_value` can tell whether any lock-related issue occurred during the closure.
Additionally, with this approach the lifetime bound on the guard is unnecessary, since there is only one way to get the guard and to drop it.
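A minimal sketch of this closure-based API (`map_lock` and `LockError` are hypothetical names for illustration, not existing xline-client items): the lock is acquired, the closure runs against the guarded data, and the release happens before `map_lock` returns, so a release failure surfaces in the returned `Result` instead of being lost in `Drop`.

```rust
use std::cell::RefCell;

// Stand-in for the protected state; a real Xutex would talk to Xline.
struct Xutex {
    data: RefCell<i32>,
}

#[derive(Debug, PartialEq)]
enum LockError {
    ReleaseFailed,
}

impl Xutex {
    fn map_lock<T>(&self, f: impl FnOnce(&mut i32) -> T) -> Result<T, LockError> {
        // acquire the lock ... (omitted in this sketch)
        let value = f(&mut *self.data.borrow_mut());
        // release the lock; a failure here is reported to the caller
        let release_ok = true; // pretend release succeeded
        if release_ok {
            Ok(value)
        } else {
            Err(LockError::ReleaseFailed)
        }
    }
}

fn main() {
    let xutex = Xutex { data: RefCell::new(41) };
    let return_value = xutex.map_lock(|guard| {
        *guard += 1;
        *guard
    });
    assert_eq!(return_value, Ok(42));
}
```

Because the guard never escapes the closure, there is no way to hold it past the release, which is why no explicit lifetime bound is needed.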
We could guarantee lock safety by coupling the lock key to every update sent to Xline; the Xline server must then verify the validity of the key. Please refer to https://jepsen.io/analyses/etcd-3.4.3
On the client side, we could attach the KV operation methods to the lock guard to prevent users from using the lock for other purposes.
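The server-side check described above can be sketched as a fencing scheme in the spirit of the linked Jepsen analysis: every update carries the lock key it was issued under, and the server rejects updates whose key no longer belongs to the current holder. All names here (`Server`, `put`, `current_holder`) are illustrative, not the Xline implementation.

```rust
use std::collections::HashMap;

struct Server {
    // Lock key of the current lock owner, if any.
    current_holder: Option<u64>,
    kv: HashMap<String, String>,
}

impl Server {
    fn put(&mut self, lock_key: u64, k: &str, v: &str) -> Result<(), &'static str> {
        // Reject writes from stale holders whose lease has expired
        // or whose lock has been taken over by another client.
        if self.current_holder != Some(lock_key) {
            return Err("stale lock key: lease expired or lock lost");
        }
        self.kv.insert(k.to_owned(), v.to_owned());
        Ok(())
    }
}

fn main() {
    let mut server = Server { current_holder: Some(1), kv: HashMap::new() };
    assert!(server.put(1, "a", "x").is_ok()); // current holder succeeds
    server.current_holder = Some(2);          // lock moved to another client
    assert!(server.put(1, "a", "y").is_err()); // stale holder is rejected
}
```

Tying KV methods to the guard on the client side then makes it hard to issue an update without the key attached.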
@Mergifyio rebase
Please briefly answer these questions:

- What problem are you trying to solve? (Or, if there's no problem, what's the motivation for this change?) Close issues #664 and #684.
- What changes does this pull request make?
  - Implement a session structure to auto-renew the lock lease.
  - Implement an `Xutex` (i.e. Xline Mutex) to describe a lock instance.
  - Provide an RAII implementation `XutexGuard` for `Xutex`.
  - Remove the `LockRequest` and `UnlockRequest` in xline-client.
  - Remove some useless test cases, like `lock_should_timeout_when_ttl_is_set`. Actually, whether the TTL is set or not, the lock in etcd won't time out; the TTL of a lock is only used for liveness checking. FYI: https://github.com/etcd-io/etcd/issues/6736
  - Remove the `validation_lock_client.rs`.
- Are there any non-obvious implications of these changes? (Does it break compatibility with previous versions, etc.) No.