Closed jonhoo closed 8 years ago
Hmm, yeah I can see why that would happen - r2d2 was designed under the assumption that the pool is going to exist for the lifetime of the process. Should be fixable, maybe via weak references in the background worker tasks or an explicit shutdown call in Pool's destructor.
Yeah, we only started running into this when running benchmarks; each benchmark constructs one instance of our application, and each instance starts its own pool (naturally, because it doesn't know about the other pools). One way to fix this would be to have `SharedPool::Drop` wait for all the workers to finish before returning. Then, `read_connections` could hold a `*const` (thus allowing the `Arc` to be dropped) while letting the workers keep using it all the way until they return.
I don't know if I feel sufficiently confident about the intricacies of the code to try and write a PR for this without some guidance. Any chance you'll be able to give it a whirl?
Yep, I should be able to poke at it tonight.
@sfackler: great, thanks!
Released v0.6.4
I'm having an issue in a long-running application where the connections made by r2d2 are not dropped after I drop the last reference to the pool. After digging through the code, I found that `new_inner` creates a second copy of the `Arc` here, which is then moved into the closure passed to `read_connections`. As far as I can tell, that closure is never dropped, which means the second reference to the `Arc` is never dropped, which in turn prevents the `SharedPool` the `Arc` wraps from being dropped. Since the `SharedPool` is never dropped, the workers are never stopped, and so the connections are all left open indefinitely.

With some debug statements inserted into r2d2 (patch below), I get the following output for my application:
Notice in particular how, by the time `Pool::new` returns, the `Arc` already has two strong references, and how, when my application drops its last reference (right before the "all done" line), there is still a leftover strong reference.

Debug patch: