Closed: NoneGG closed this 7 years ago
Does upstream code have this? If not, I suggest you take it up with them first.
If this behavior is ever proposed upstream, my bet is that core Python developers will reject it in limine.
The whole point of an asynchronous framework like concurrent.futures is to never block. @agronholm I would reject and close this PR.
> The whole point of an asynchronous framework like concurrent.futures is to never block.
concurrent.futures is not an asynchronous framework. Future.result() already blocks, by design of course. I do concur with the rest of the comments, though.
@agronholm Sorry for my bad wording; I meant an asynchronous *execution* framework, and "never block" of course does not apply to Future.result().
Sure, thank you for your feedback~
problem
I ran into memory exhaustion when using concurrent.futures.ProcessPoolExecutor in a production environment. The code is similar to the code below:
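The original snippet was not captured in this thread; the following is a minimal sketch of the pattern being described, with an illustrative task (time.sleep) and small sizes standing in for the real production workload:

```python
# Hypothetical reproduction sketch (not the reporter's original code):
# tasks are submitted far faster than workers finish them, so every
# queued argument stays alive in executor._pending_work_items.
import time
from concurrent.futures import ProcessPoolExecutor

def count_pending(n, delay=0.2):
    """Submit n slow tasks at once and report how many are still pending."""
    with ProcessPoolExecutor(max_workers=2) as ex:
        # Submission is effectively instant; with a large or unbounded
        # input, the _pending_work_items dict grows without limit.
        futures = [ex.submit(time.sleep, delay) for _ in range(n)]
        pending = len(ex._pending_work_items)
        for f in futures:
            f.result()
        return pending
```

With n in the millions, as in a long-running production stream of work, that pending dict (and the arguments it keeps alive) is what exhausts memory.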
reason
I found that if the subprocess tasks take longer to finish than they take to submit (which is almost always the case), ProcessPoolExecutor._pending_work_items fills up with the submitted parameters; the dict grows very large and memory is exhausted. By the way, I doubt whether del in Python 2 really releases memory back to the system (this seems to have been improved in Python 3).
how to avoid
I added a check on _pending_work_items in ProcessPoolExecutor.submit before actually adding a new work item to _pending_work_items. If len(self._pending_work_items) > self._max_workers + EXTRA_QUEUED_CALLS, submit blocks and waits until work items are removed from _pending_work_items.