roboflow / supervision

We write your reusable computer vision tools. πŸ’œ
https://supervision.roboflow.com
MIT License
24.21k stars 1.8k forks

Increasing Video FPS running on CPU Using Threading #1411

Open dsaha21 opened 3 months ago

dsaha21 commented 3 months ago

Search before asking

Description

I want to increase the FPS of a video running on my CPU system. I tested with a few annotated and object-tracking videos. Even when I run the frames without passing them through a model, the fps is low, so it drops even lower when passing them through YOLO or any other model.

The code snippet I am using is

[screenshot: VideoSpeed1]

So, with the following method and running the normal frames I am getting something like the following :

[screenshot: VideoSpeed2]

With supervision's standard frame generator, the fps is around 1-10 at most. With threading, it increases to a much greater value.
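For reference, FPS numbers like the ones above can be measured by timing the frame loop. A minimal sketch, using a dummy frame source so it runs anywhere; with supervision you would pass `sv.get_video_frames_generator(video_path)` instead:

```python
import time

def measure_fps(frame_source):
    """Count frames yielded per second by any iterable of frames.
    With supervision, pass sv.get_video_frames_generator(path) here."""
    start = time.perf_counter()
    n = 0
    for _ in frame_source:
        n += 1  # in a real benchmark, the model call would go here
    elapsed = time.perf_counter() - start
    return n / elapsed if elapsed > 0 else float("inf")

# demo with a dummy 1000-frame source
fps = measure_fps(range(1000))
print(fps > 0)  # True
```

Measuring the bare loop (no model) first, as the issue describes, separates I/O/decoding cost from inference cost.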

Use case

As we can see, there is a significant improvement with threading. I was wondering if we could add a MainThread class to the supervision utils, in sv.VideoInfo, or add an entirely new class, so that frames processed on CPU can reach such fps. Let me know if we can handle such a case. I can share the Python file on Drive if necessary.

Thanks

Additional

No response

Are you willing to submit a PR?

yeldarby commented 3 months ago

Have you tried InferencePipeline from our other open source repo? It handles multithreading for video and can even handle processing multiple streams concurrently.

dsaha21 commented 3 months ago

Hi @yeldarby, let me give it a try with the InferencePipeline. If it's successful, I will close the issue.

Thanks for the help πŸ‘

SkalskiP commented 3 months ago

Hi @dsaha21, get_video_frames_generator was meant to be a very simple utility. I agree with @yeldarby: if you want higher fps throughput, InferencePipeline is for you. Also, are you sure you got 0.17 fps? That seems super low.

dsaha21 commented 3 months ago

Hi @SkalskiP, yes, it's actually very slow. I am trying resizing the frames and InferencePipeline as mentioned above. Will let you know if it runs with a good fps.

Thank you :)

Sapienscoding commented 1 month ago

Hi @dsaha21, were you able to fix it? If not, I can submit a pull request and work on it.

dsaha21 commented 1 month ago

Hi @Sapienscoding, you can continue with this; I really did not get the time to keep working on it.

I opened the issue after reading an article posted by PyImageSearch about speeding up fps using threading. However, before continuing with threading, you can go through this: https://inference.roboflow.com/using_inference/inference_pipeline/

I hope it will solve the problem.

LinasKo commented 1 month ago

Hi @Sapienscoding πŸ‘‹

Great to see you're eager to help us out! I'm assigning this to you.

Sapienscoding commented 1 month ago

Hi @dsaha21, what did you change, and where, to see the improvement in frames?

dsaha21 commented 1 month ago

Hi @Sapienscoding, you can follow these steps:

1. `pip install inference`
2. `pip install inference-gpu` (if you have an NVIDIA GPU, this build can accelerate your inference)
3. Run the pipeline:

```python
# import the InferencePipeline interface
from inference import InferencePipeline
# import a built-in sink called render_boxes (sinks are the logic that runs after inference)
from inference.core.interfaces.stream.sinks import render_boxes

api_key = "YOUR_ROBOFLOW_API_KEY"

# create an inference pipeline object
pipeline = InferencePipeline.init(
    model_id="yolov8x-1280",  # a yolov8x model with input size 1280
    # video reference (source of video): a link/path to a video file, an RTSP
    # stream URL, or an integer device id (usually 0 for built-in webcams)
    video_reference="https://storage.googleapis.com/com-roboflow-marketing/inference/people-walking.mp4",
    on_prediction=render_boxes,  # function called with each set of predictions
    api_key=api_key,  # your Roboflow API key, used to load models from the Roboflow API
)
# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()
```

Please try this on Google Colab first. If you have a Roboflow API key, that works best; otherwise download a model manually, e.g. yolov8n.pt for basic object detection. Then start the inference pipeline and test the fps.

@LinasKo Just wanted to check: if we don't have a Roboflow API key, is manually downloading the model, as described above, the correct thing to do? Please let me know; then @Sapienscoding can follow the steps above.

LinasKo commented 1 month ago

There is a set of models that do not need an API key: https://inference.roboflow.com/quickstart/aliases/

All others will need a key.

@dsaha21, you gave a very good example of using InferencePipeline, but it doesn't provide a way to speed up our frame processing on CPU, especially if we choose not to use inference. You mentioned experimenting with threading - do you still have an example that produced an improvement?

dsaha21 commented 1 month ago

@LinasKo Actually I did not test the algorithm with threading, sorry. I will try testing it. Until then, let's give @Sapienscoding a chance to try, as he said he would take on this issue.

Once the testing is done, I will post the fps improvement ASAP.

dsaha21 commented 1 month ago

> Hi @dsaha21, what and where did you change to see the difference in frames improvement?

@Sapienscoding My plan was to first check the baseline fps by running the video with the supervision library.

Then I would use a Queue data structure with threading: a class named VideoStream containing start(), stop(), update(), and read() methods. After that, run the class on a video stored on the system and check the fps on CPU.

That was my plan. Have you tried something like this?
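The start()/stop()/update()/read() plan above can be sketched roughly as below. This is a minimal sketch, not the supervision API: the `dummy_read` callable stands in for something like `cv2.VideoCapture.read()`, so the example runs without OpenCV or a video file. The idea is that a producer thread keeps a bounded queue filled, so the consumer (the model) never blocks on frame decoding.

```python
import queue
import threading

class VideoStream:
    """Threaded frame reader: a background thread fills a bounded queue
    with frames while the main thread consumes them."""

    def __init__(self, read_frame, max_queue=128):
        self._read_frame = read_frame          # callable returning (ok, frame)
        self._queue = queue.Queue(maxsize=max_queue)
        self._stopped = threading.Event()
        self._thread = threading.Thread(target=self._update, daemon=True)

    def start(self):
        self._thread.start()
        return self

    def _update(self):
        # producer loop: read frames until the source is exhausted or stop() is called
        while not self._stopped.is_set():
            ok, frame = self._read_frame()
            if not ok:
                break
            self._queue.put(frame)             # blocks if the queue is full
        self._stopped.set()

    def read(self, timeout=1.0):
        # raises queue.Empty if no frame arrives within the timeout
        return self._queue.get(timeout=timeout)

    def stop(self):
        self._stopped.set()
        self._thread.join(timeout=1.0)

# demo: a dummy 100-frame source standing in for cv2.VideoCapture.read()
frames = iter(range(100))
def dummy_read():
    try:
        return True, next(frames)
    except StopIteration:
        return False, None

stream = VideoStream(dummy_read).start()
count = 0
while True:
    try:
        stream.read(timeout=0.5)              # a model call would process the frame here
    except queue.Empty:
        break
    count += 1
stream.stop()
print(count)  # 100
```

The bounded queue is the important design choice: an unbounded queue would let the reader race far ahead of a slow model and exhaust memory on long videos.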

Sapienscoding commented 1 month ago

@dsaha21 I'm thinking of using asyncio to process frames during inference. However, can you test it out like @LinasKo mentioned, to see if you're getting any improvement in FPS?
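A minimal sketch of the asyncio producer/consumer idea, with a doubling step as a stand-in for a real inference call. One caveat: asyncio alone does not parallelize CPU-bound work, so a real model call should be offloaded (e.g. via `loop.run_in_executor`) rather than awaited directly.

```python
import asyncio

async def producer(q, frames):
    # feed frames into the queue, then signal completion with a sentinel
    for f in frames:
        await q.put(f)
    await q.put(None)

async def consumer(q, results):
    while True:
        f = await q.get()
        if f is None:
            break
        # simulated "inference"; a real CPU-bound model call should go
        # through run_in_executor so it doesn't block the event loop
        results.append(f * 2)

async def main():
    q = asyncio.Queue(maxsize=32)   # bounded, so the producer can't race ahead
    results = []
    await asyncio.gather(producer(q, range(10)), consumer(q, results))
    return results

print(asyncio.run(main()))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

This mirrors the threaded VideoStream plan discussed earlier in the thread, just with coroutines instead of threads; which wins depends on whether the bottleneck is I/O (asyncio/threads help) or CPU (neither helps without a process pool or releasing the GIL).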