pipeless-ai / pipeless

An open-source computer vision framework to build and deploy apps in minutes
https://pipeless.ai
Apache License 2.0
707 stars 32 forks

Creating pipes for GWakeup: Too many open files Trace/breakpoint trap (core dumped) #137

Closed udhay24 closed 6 months ago

udhay24 commented 6 months ago

Describe the bug
When I try to open multiple streams, I run into a "Too many open files" error.

To Reproduce
Steps to reproduce the behavior:

  1. Run 30+ streams simultaneously on the TensorRT runtime with RTSP as input
  2. GPU usage is around ~35% during this period


miguelaeh commented 6 months ago

Hi @udhay24, does it work if you increase the default limit?

You can do something like:

ulimit -n 2048

By default, it is 1024 in most cases.
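For reference, a quick way to check the current soft limit and raise it for the current shell session (the value 2048 is just the example above; pick whatever your stream count needs, up to the hard limit shown by `ulimit -Hn`):

```shell
# Show the current soft limit on open file descriptors (often 1024)
ulimit -n

# Raise it for this shell session and any child processes started from it
ulimit -n 2048

# Confirm the new limit took effect
ulimit -n
```

Note this only affects the current session; to make it persistent you would typically edit your system's limits configuration (e.g. `/etc/security/limits.conf` on Linux).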

udhay24 commented 6 months ago

Hi @miguelaeh, I tried increasing the ulimit and it fixed the issue.

Currently I am using a 15 GB RAM and 4 vCPU spec. As I approach 30+ streams, the RAM reaches its limit and the application crashes. Do you have a suggested configuration or options I can use to make it more efficient?

Thanks

miguelaeh commented 6 months ago

Hi @udhay24 , I am glad it fixed it!

What you are experiencing with the RAM is probably because some internal queues grow for all those streams, filling memory with raw frame data. To avoid that, you can set --stream-buffer-size <number> when starting Pipeless. The number is the maximum number of frames that can be kept in memory per stream (note it is a frame count, so if your input frames are large you will need lower values). For example, if you set it to 10, each stream will queue at most 10 frames to be processed, regardless of frame size. Frames that do not fit into that window are discarded, so your streams maintain real-time processing without filling the memory.
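To get some intuition for why this matters, here is a back-of-the-envelope estimate of the raw frame data held in memory in the worst case. The resolution, stream count, and buffer size below are illustrative assumptions, not measurements from your setup:

```python
# Rough worst-case memory for buffered raw frames (all numbers are assumptions)
frame_bytes = 1920 * 1080 * 3   # one raw 1080p RGB frame: ~6.2 MB
streams = 30                    # number of simultaneous streams
buffer_frames = 10              # hypothetical --stream-buffer-size value

total_bytes = frame_bytes * streams * buffer_frames
print(f"{total_bytes / 1024**3:.2f} GiB")  # ~1.74 GiB of raw frame data
```

With unbounded queues the `buffer_frames` factor can grow without limit, which is how 30+ streams can exhaust 15 GB of RAM; capping it bounds the total.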

Finding the best value for your case requires you to play around with it a bit and test a few numbers.

There is an example of how to use the flag here: https://www.pipeless.ai/docs/docs/v1/examples/onnx-yolo-world#start-pipeless
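Roughly, the invocation looks like this (a sketch based on the linked example; the stages directory path and buffer value are placeholders for your own setup):

```shell
# Start Pipeless, capping each stream's in-memory queue at 10 frames;
# frames beyond the window are dropped to keep processing real-time
pipeless start --stages-dir . --stream-buffer-size 10
```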

Hope this helps!

udhay24 commented 6 months ago

Yeah, that fixed the RAM overflow issue. Thanks a lot!