Closed: AnaelGorfinkel closed this issue 1 year ago
I'm unable to reproduce the problem. What am I supposed to be seeing in the output? The reported number of processes remains the same at every iteration. This is what I got:
Note that your code counts all Python processes, not just the ones spawned by the script.
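To count only the processes the script itself spawned, `psutil` can walk the current process's descendants instead. This is a hedged sketch, not code from the issue: the `worker` function and the count of 3 are illustrative.

```python
import multiprocessing
import time

import psutil


def worker() -> None:
    # Stand-in child process that stays alive long enough to be counted
    time.sleep(2)


def count_spawned_children() -> int:
    # Count only the descendants of this script's process, rather
    # than every Python process on the machine
    return len(psutil.Process().children(recursive=True))


if __name__ == "__main__":
    procs = [multiprocessing.Process(target=worker) for _ in range(3)]
    for p in procs:
        p.start()
    print(count_spawned_children())
    for p in procs:
        p.join()
```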
Things to check first
[X] I have searched the existing issues and didn't find my bug already reported there
[X] I have checked that my bug is still present in the latest release
AnyIO version
3.7.0
Python version
3.10.11
What happened?
I encountered an issue while using `to_process.run_sync()` in conjunction with a process pool. The problem arises when the process pool loses track of worker processes, leading to excessive memory consumption.

In my scenario, I needed to execute a CPU-intensive task asynchronously multiple times at unspecified intervals. Recognizing the overhead involved in process creation, I turned to the solution of maintaining a process pool. I implemented your provided code with the following parameters:
Although I built my code on your process pool maintenance solution, I encountered a noticeable surge in memory consumption. Additionally, the number of active processes surpassed the limit of 30 that I had set.
Assumed root cause: while investigating your code, I observed that the issue may stem from AnyIO losing the reference to a worker process while identifying idle processes that are meant to be terminated.
Following that investigation, I patched your code locally and successfully resolved the problem; the code now functions as intended.
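As a generic illustration of that failure mode (this is not AnyIO's actual code — the `Worker` class and the timeout are hypothetical), dropping an idle worker from the pool without terminating it leaves the underlying process alive but untracked:

```python
import time
from dataclasses import dataclass

MAX_IDLE = 10.0  # hypothetical idle timeout in seconds


@dataclass
class Worker:
    # Hypothetical stand-in for a pooled worker process
    idle_since: float
    terminated: bool = False

    def terminate(self) -> None:
        self.terminated = True


def prune_buggy(pool: list, now: float) -> None:
    # BUG: stale workers are forgotten without being terminated,
    # so the underlying processes keep running untracked
    pool[:] = [w for w in pool if now - w.idle_since <= MAX_IDLE]


def prune_fixed(pool: list, now: float) -> None:
    # Keep a reference to each stale worker and terminate it
    # before removing it from the pool
    stale = [w for w in pool if now - w.idle_since > MAX_IDLE]
    for w in stale:
        w.terminate()
    pool[:] = [w for w in pool if w not in stale]
```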
How can we reproduce the bug?
Note that I used the `psutil` package to track the current number of Python processes.
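The tracking can be sketched with `psutil.process_iter` (a hedged sketch — as noted in the reply above, this counts every Python interpreter on the machine, not just the ones spawned by the script):

```python
import psutil


def count_python_processes() -> int:
    # Count every running process whose name contains "python",
    # including interpreters not spawned by this script
    return sum(
        1
        for proc in psutil.process_iter(["name"])
        if "python" in (proc.info["name"] or "").lower()
    )


if __name__ == "__main__":
    print(count_python_processes())
```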