Well, I just realized that we are using `std::sync::Mutex` ONLY for `FORK_MTX`, so the problem described in this issue only applies to it.
Another update: I found that we switched to `parking_lot` just for its no-poison feature in #1599, and in #2107 we reverted that change for `FORK_MTX`. Is this a mistake, or was it done on purpose?
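For context, the difference #1599 relied on is that `parking_lot::Mutex::lock()` returns the guard directly and never poisons, while `std::sync::Mutex::lock()` returns a `Result` that becomes `Err` once a thread panics while holding the lock. A minimal standalone sketch (not nix code; it assumes the `parking_lot` crate is available):

```rust
use std::sync::{Arc, Mutex as StdMutex};
use std::thread;

use parking_lot::Mutex as PlMutex;

fn main() {
    let std_mtx = Arc::new(StdMutex::new(()));
    let pl_mtx = Arc::new(PlMutex::new(()));

    // A thread panics while holding both locks (like a failing test).
    let (std_clone, pl_clone) = (Arc::clone(&std_mtx), Arc::clone(&pl_mtx));
    let _ = thread::spawn(move || {
        let _std_guard = std_clone.lock().unwrap();
        let _pl_guard = pl_clone.lock();
        panic!("simulated test failure");
    })
    .join();

    // std::sync::Mutex: the lock is now poisoned, so lock() returns Err.
    assert!(std_mtx.lock().is_err());

    // parking_lot::Mutex: no poisoning; lock() returns the guard directly,
    // so a panicking test cannot break later acquisitions.
    let _guard = pl_mtx.lock();
}
```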
No, I think it was accidental to switch from `parking_lot` back to std in #2107.
Ok, I will change it back tomorrow
We have some process-wide Mutex locks in `test.rs` to ensure that tests which need to modify global state are executed sequentially:
https://github.com/nix-rust/nix/blob/a7db481a2f5e43cd5c8f2c5a3d35118ebcfa77da/test/test.rs#L59-L74
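The pattern looks roughly like this (a sketch only; the real definitions are at the link above, and the test body here is made up):

```rust
use std::sync::Mutex;

// One process-wide lock per piece of global state the tests touch.
// FORK_MTX is the name used in test.rs; other locks exist at the link.
pub static FORK_MTX: Mutex<()> = Mutex::new(());

#[test]
fn test_that_forks() {
    // Hold the guard for the whole test body so that other fork-related
    // tests block here and run sequentially. Note that the Result returned
    // by lock() is bound but never checked, which is what this issue is about.
    let _guard = FORK_MTX.lock();

    // ... fork(), waitpid(), assertions on the child ...
}
```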
This works well as long as the corresponding test succeeds. But once a test fails while holding the lock, the thread running it panics and the lock gets poisoned; other tests (threads) then cannot acquire the lock cleanly. And since we don't check whether the lock acquisition succeeded in our tests, those global locks won't work at all in that scenario:
https://github.com/nix-rust/nix/blob/a7db481a2f5e43cd5c8f2c5a3d35118ebcfa77da/test/test_unistd.rs#L43
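To make the failure mode concrete, here is a standalone sketch (not the actual nix tests) of how a single panicking test poisons a `std::sync::Mutex` and how a later, unchecked acquisition hides it:

```rust
use std::sync::Mutex;
use std::thread;

static FORK_MTX: Mutex<()> = Mutex::new(());

fn main() {
    // "Test A" fails (panics) while holding the lock; the unwinding drop
    // of the guard marks the mutex as poisoned.
    let _ = thread::spawn(|| {
        let _guard = FORK_MTX.lock().unwrap();
        panic!("assertion failed in test A");
    })
    .join();

    // "Test B" now gets Err(PoisonError) back from lock(). Because the
    // Result is never checked, the test body just keeps running.
    let unchecked = FORK_MTX.lock();
    assert!(unchecked.is_err());
}
```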
Given that those global locks do not work in the case described above, we can get unexpected results from our tests. For example, see these 2 failures here: both tests want to wait for the child process they created themselves, but got the children of other tests:
I am thinking about calling `.unwrap()` on `lock()` so that tests that cannot acquire the lock will panic explicitly, which would make things clearer, though I cannot say it is generally a better approach. cc @asomers
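Concretely, the change would turn the unchecked acquisition into an explicit `.unwrap()` (or a more descriptive `.expect()`), roughly like this (sketch only; the lock name comes from test.rs, the test body is illustrative):

```rust
use std::sync::Mutex;

static FORK_MTX: Mutex<()> = Mutex::new(());

#[test]
fn test_needing_exclusive_fork() {
    // Before: `let _guard = FORK_MTX.lock();` silently ignores poisoning.
    // After: a poisoned lock makes this test fail loudly right here.
    let _guard = FORK_MTX
        .lock()
        .expect("FORK_MTX poisoned: an earlier test panicked while holding it");

    // ... fork(), waitpid(), assertions ...
}
```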