JuliaParallel / DistributedNext.jl

Bleeding-edge fork of Distributed.jl
http://juliaparallel.org/DistributedNext.jl/
MIT License

allow initiating peer to close stream gracefully #5

Open simeonschaub opened 2 weeks ago

simeonschaub commented 2 weeks ago

I have a use case where I would like to remove a worker without killing it. Currently, trying to disconnect from the head node will cause the message handler loop to throw a fatal exception, so this adds a check that the connection is still open when trying to read new messages.
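For context, here is a minimal self-contained sketch of the pattern being described (not the actual DistributedNext patch): check that the stream is still open before each read, so a peer closing the connection ends the loop cleanly instead of throwing a fatal exception.

```julia
# Illustrative only: a toy read loop showing the guard described above, not the
# real DistributedNext message handler loop. It assumes a plain TCP stream.
using Sockets

function read_loop(sock::TCPSocket)
    # `isopen` catches a locally-closed socket; `eof` returns true once the peer
    # has closed its end, so the loop exits instead of hitting a read error.
    while isopen(sock) && !eof(sock)
        line = readline(sock)
        println("received: ", line)
    end
    @info "connection closed; exiting read loop gracefully"
end
```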

The test case is taken from https://github.com/simeonschaub/PersistentWorkers.jl, which I'd eventually like to register once this is merged.

codecov-commenter commented 2 weeks ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 25.73%. Comparing base (76df474) to head (719be61).

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master       #5       +/-   ##
===========================================
- Coverage   86.20%   25.73%   -60.48%
===========================================
  Files          10       10
  Lines        1965     1916       -49
===========================================
- Hits         1694      493     -1201
- Misses        271     1423     +1152
```

:umbrella: View full report in Codecov by Sentry.

JamesWrigley commented 2 weeks ago

I'm surprised at the seemingly-random CI failures :thinking: Does this also happen with Distributed?

simeonschaub commented 2 weeks ago

Hmm, I had hard-to-reproduce failures with these tests before, but I thought I had fixed everything. At least when I use persistent workers interactively I don't encounter any of these issues, so it seems to have something to do with the worker process being spawned from another Julia process like this.

Can you think of any way to test this more directly? I am almost tempted to go ahead with this without any test, as the change itself is really quite simple, but that of course makes it hard to ensure this functionality does not regress.

JamesWrigley commented 2 weeks ago

I'm a bit hesitant to merge it directly given how common the failures are :sweat_smile: I'll try running the tests locally and see if I can reproduce them or come up with a simpler test. Could just be a bug in the tests rather than in the actual change.

JamesWrigley commented 2 weeks ago

I think this is a race condition caused by improper behaviour in DistributedNext.launch(::PersistentWorkerManager). If I read it correctly, all the function does is assign some properties and then return, without actually ensuring that the worker has been started. But Distributed/DistributedNext requires that the worker is running by the time launch() finishes, because immediately afterwards the head node tries to connect to it, and that's what's causing the connection failures.

This happens in CI because the workers take an extra 1-2s to precompile, which is more than the sleep(1) statement allows for. Adding a sleep(10) fixed it, and you can see in the worker logs how long the precompilation took, but I think the real fix is to ensure that launch(::PersistentWorkerManager) only returns after the worker has started. Then we can also delete the sleep(1) call in the tests.
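For illustration, a rough sketch of what a blocking launch() could look like, assuming DistributedNext mirrors Distributed's ClusterManager API and assuming a hypothetical PersistentWorkerManager that knows its worker's host and port (the real PersistentWorkers.jl type and fields may differ). The only point is that launch() probes the worker until it is reachable before returning, so the head node's immediate connect cannot race worker startup.

```julia
# Hypothetical sketch, not the actual PersistentWorkers.jl code: a ClusterManager
# whose launch() blocks until the worker's listen socket accepts connections.
using DistributedNext, Sockets

struct PersistentWorkerManager <: DistributedNext.ClusterManager
    host::String
    port::Int
end

function DistributedNext.launch(mgr::PersistentWorkerManager, params::Dict,
                                launched::Array, launch_ntfy::Condition)
    # Probe the worker's port (with a timeout) instead of returning immediately;
    # this replaces the sleep(1)/sleep(10) guesswork in the tests.
    deadline = time() + 30
    while time() < deadline
        try
            close(connect(mgr.host, mgr.port))  # connection succeeded; worker is up
            break
        catch
            sleep(0.1)
        end
    end

    config = DistributedNext.WorkerConfig()
    config.host = mgr.host
    config.port = mgr.port
    push!(launched, config)
    notify(launch_ntfy)
    return nothing
end
```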

(Feel free to delete/squash my debugging commits BTW, though I would recommend keeping the calls I added to pipeline() and wait(worker).)
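As a guess at the kind of debugging calls referred to above (the command and arguments below are stand-ins, not the real test code), forwarding the spawned worker's output with pipeline() and wait()ing on the process makes startup timing visible in the CI logs:

```julia
# Stand-in command: the real tests would launch a PersistentWorkers.jl worker here.
worker_cmd = `$(Base.julia_cmd()) -e 'println("worker up"); sleep(2)'`

# Forward the worker's stdout/stderr into the test log so precompilation time and
# startup messages are visible in CI, and keep a process handle to wait on.
worker = run(pipeline(worker_cmd; stdout=stdout, stderr=stderr); wait=false)

# ... the test body would connect to and exercise the worker here ...

wait(worker)  # block until the worker process has exited before the test ends
```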