vutran1710 / PyrateLimiter

⚔️ Python Rate-Limiter using Leaky-Bucket Algorithm Family
https://pyratelimiter.readthedocs.io
MIT License

Multiprocessing clean-up error #154

Closed: jthart-freeday closed this issue 5 months ago

jthart-freeday commented 6 months ago

We use PyrateLimiter with a Redis bucket in our Django application, which runs on Google Cloud Run. When a process waits too long, our Cloud Run instance terminates due to a timeout, and when that happens we get the error below.

Exception ignored in: <function Pool.__del__ at 0x3e1935551da0>
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/multiprocessing/pool.py", line 271, in __del__
    self._change_notifier.put(None)
  File "/usr/local/lib/python3.11/multiprocessing/queues.py", line 377, in put
    self._writer.send_bytes(obj)
  File "/usr/local/lib/python3.11/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/usr/local/lib/python3.11/multiprocessing/connection.py", line 427, in _send_bytes
    self._send(header + buf)
  File "/usr/local/lib/python3.11/multiprocessing/connection.py", line 384, in _send
    n = write(self._handle, buf)
        ^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 9] Bad file descriptor

It looks like a task is waiting for its turn to execute, but the Cloud Run instance terminates while it is still waiting. In that scenario, the clean-up of the multiprocessing pool seems to fail.

Do you have any idea what could cause this?
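
For reference, a minimal sketch of the kind of setup described above, assuming the PyrateLimiter 3.x Rate/RedisBucket API; the rate values, bucket key, and Redis URL are illustrative placeholders, not our actual configuration:

from redis import ConnectionPool, Redis
from pyrate_limiter import Duration, Limiter, Rate, RedisBucket

# Illustrative rate: at most 5 acquisitions per second
rates = [Rate(5, Duration.SECOND)]

# Redis connection shared by the Django workers (URL is a placeholder)
redis_db = Redis(connection_pool=ConnectionPool.from_url("redis://localhost:6379"))

# The bucket state lives in Redis under the given key
bucket = RedisBucket.init(rates, redis_db, "example-rate-limit-bucket")

# max_delay makes try_acquire wait for its turn instead of failing
# immediately, which is the waiting behaviour mentioned above
limiter = Limiter(bucket, raise_when_fail=False, max_delay=Duration.MINUTE)

limiter.try_acquire("some-request-id")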

vutran1710 commented 6 months ago

Would this help? https://stackoverflow.com/questions/36596805/python-multiprocessing-claims-too-many-open-files-when-no-files-are-even-opened

vutran1710 commented 6 months ago

You can create your own ThreadPool with a smaller process limit and pass it to the Limiter, like this: https://github.com/vutran1710/PyrateLimiter/blob/master/pyrate_limiter/limiter.py#L71
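
Roughly like this, as a sketch. The thread_pool keyword is an assumption based on the constructor linked above; the parameter name and default may differ between releases, so check the signature of the version you have installed:

from multiprocessing.pool import ThreadPool
from pyrate_limiter import Duration, Limiter, Rate

# A small pool instead of the default one, to cap the number of worker threads
small_pool = ThreadPool(processes=2)

# Passing the pool into the Limiter constructor; the thread_pool parameter
# name is taken from the linked line and may not match every release
limiter = Limiter(
    Rate(5, Duration.SECOND),
    raise_when_fail=False,
    max_delay=Duration.MINUTE,
    thread_pool=small_pool,
)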

vutran1710 commented 5 months ago

The latest version (3.4.1) has this fixed! Please upgrade!
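
For anyone finding this later: the package is published on PyPI as pyrate-limiter, so the upgrade is

pip install --upgrade "pyrate-limiter>=3.4.1"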