AdamSpannbauer / python_video_stab

A Python package to stabilize videos using OpenCV
https://adamspannbauer.github.io/python_video_stab/html/index.html
MIT License
682 stars · 118 forks

Improve frame rate for live video #97

Closed · pch9520 closed this issue 4 years ago

pch9520 commented 4 years ago

I have tried your demo with a live video stream, but the delay is bad; I don't think I can use this demo to process images in real time. Although the video runs at 10 fps, I think the output is so stable that it causes the severe delay. How can I make it a little less stable to improve the real-time performance?

AdamSpannbauer commented 4 years ago

The level of stability of the input video has no effect on processing time; only the number of frames being processed does.

To decrease the delay while processing a live video you can change the smoothing_window* parameter, which has a default value of 30. Lowering this number will lower the delay. I will admit the speed of processing for live video is not great, but you can make the call if it's good enough for your use case after lowering this value.

*The smoothing_window parameter adjusts the window size for a moving average of the stabilizing transforms. To do a moving average across 30 frames, the output needs to be delayed by 30 frames.
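The delay follows directly from the windowed average. A minimal sketch (hypothetical, not vidstab's internals) shows why: no smoothed value can be emitted until a full window of inputs has arrived, so the output lags the live stream by the window size.

```python
from collections import deque


def smoothed_stream(values, window=30):
    """Yield the moving average of `values` over `window` items.

    Each output lags the live input by `window` items, which is why a
    larger smoothing_window means more lag on live video.
    """
    buf = deque(maxlen=window)
    for v in values:
        buf.append(v)
        if len(buf) == window:  # nothing to emit until the window fills
            yield sum(buf) / window


# With window=3, the first output appears only after 3 inputs arrive:
list(smoothed_stream([1, 2, 3, 4, 5], window=3))  # → [2.0, 3.0, 4.0]
```

Setting smoothing_window=1 collapses the window, so each frame can be emitted immediately at the cost of smoothness.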

pch9520 commented 4 years ago

OK, thank you anyway, I will give it a try later! @AdamSpannbauer

pch9520 commented 4 years ago

I have tried your method; changing the value of smoothing_window to 1 satisfies my needs. But the video now runs at only 10 fps, while before applying this method it ran at 30 fps.

AdamSpannbauer commented 4 years ago

To clarify, could you confirm whether I understand the issue correctly?

pch9520 commented 4 years ago

This is my code:

from timeit import default_timer as timer

import cv2
from vidstab import VidStab

stabilizer = VidStab()
vidcap = cv2.VideoCapture(0)
start_time = timer()
while True:
    grabbed_frame, frame = vidcap.read()

    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame,
                                                  smoothing_window=1)
    if stabilized_frame is None:
        # There are no more frames available to stabilize
        break

    # Overlay the instantaneous fps (cv2.putText draws in place)
    curr_time = timer()
    fps = int(1 / (curr_time - start_time))
    cv2.putText(stabilized_frame, str(fps), (20, 30),
                cv2.FONT_HERSHEY_COMPLEX, 1.2, (0, 0, 255), 2)

    # Perform any post-processing of stabilized frame here
    cv2.imshow('Test', stabilized_frame)
    start_time = timer()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

If I only use cv2.VideoCapture, the processing rate is 30 fps. If I add VidStab, the processing rate drops to 10 fps.

AdamSpannbauer commented 4 years ago

Thank you for providing code. I’ll look into this and see if there’s a way to speed things up.

AdamSpannbauer commented 4 years ago

Going to investigate the idea proposed in #99 of processing frames at a smaller size (proposed by @bryanbosworth).
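The general technique behind that idea can be sketched roughly (hypothetical helper, not vidstab's actual implementation): estimate the stabilizing transform on a downscaled copy of the frame, then scale the translation terms back up before warping the full-resolution frame. Only the translation column of a 2x3 affine transform depends on frame size; the rotation/scale part does not.

```python
import numpy as np


def upscale_transform(transform, scale):
    """Scale a 2x3 affine transform estimated on a downscaled frame so it
    can be applied to the full-resolution frame.

    The left 2x2 (rotation/scale) block is size-independent; only the
    translation column grows with frame size.
    """
    full = transform.copy()  # don't mutate the caller's array
    full[:, 2] *= scale
    return full


# A transform estimated at half size: a 5 px shift becomes 10 px at full size
small = np.array([[1.0, 0.0, 5.0],
                  [0.0, 1.0, -3.0]])
upscale_transform(small, 2.0)  # translation column → [10.0, -6.0]
```

Motion estimation on the small frame is much cheaper (feature detection and optical flow scale with pixel count), which is where the speedup comes from.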

AdamSpannbauer commented 4 years ago

I've implemented the suggestion of restricting frame size when processing. It leads to better processing speeds, but it does affect the output. I wanted to give users the ability to set this since there is likely a different quality/speed tradeoff for each use case. This has been implemented as the processing_max_dim[1] attribute on the VidStab class that can be set when creating a VidStab instance (see example code below).

Below are fps measures[2] on my machine[3] for selected values of processing_max_dim:

Example of using parameter with live video:

from collections import deque
from datetime import datetime
import cv2
import numpy as np
from vidstab import VidStab
from imutils.text import put_text

# Default is to process at original size to avoid loss of quality
# Identical to setting processing_max_dim=float('inf')
stabilizer = VidStab()

# Optionally set processing_max_dim to improve processing speed
# Trade off of processing speed and quality of output
stabilizer = VidStab(processing_max_dim=500)

vidcap = cv2.VideoCapture(0)
start_time = datetime.now()
curr_time = datetime.now()

fps_window = deque(maxlen=30)
while True:
    grabbed_frame, frame = vidcap.read()

    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame,
                                                  smoothing_window=1)
    if stabilized_frame is None:
        # There are no more frames available to stabilize
        break

    curr_time = datetime.now()
    seconds = (curr_time - start_time).total_seconds()
    fps = 1 / seconds
    fps_window.append(fps)
    fps_mean = np.mean(fps_window)

    stabilized_frame = put_text(stabilized_frame,
                                f'{fps_mean:.2f} fps ({fps_window.maxlen} frame rolling mean)\n'
                                f'processing_max_dim: {stabilizer.processing_max_dim}',
                                (20, 30),
                                cv2.FONT_HERSHEY_SIMPLEX,
                                1.2, (0, 0, 255), 2)

    cv2.imshow('Frame', stabilized_frame)

    key = cv2.waitKey(1)
    if key == 27:
        break

    start_time = datetime.now()

[1] processing_max_dim documentation:

:param processing_max_dim: Working with large frames can harm performance (especially in live video).
                           Setting this parameter can restrict frame size while processing.
                           The outputted frames will remain the original size.

                           For example:
                           * If an input frame's shape is `(200, 400, 3)` and `processing_max_dim` is
                             100, the frame will be resized to `(50, 100, 3)` before processing.
                           * If an input frame's shape is `(400, 200, 3)` and `processing_max_dim` is
                             100, the frame will be resized to `(100, 50, 3)` before processing.
                           * If an input frame's shape is `(50, 50, 3)` and `processing_max_dim` is
                             100, the frame will be unchanged for processing.
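The resize rule in the docstring can be expressed as a small helper (hypothetical name, mirroring the documented behavior): cap the longer side at processing_max_dim while preserving the aspect ratio, and leave frames that are already within the limit untouched.

```python
def processing_shape(height, width, processing_max_dim):
    """Return the (height, width) a frame would be processed at.

    The longer side is capped at processing_max_dim, the aspect ratio is
    preserved, and frames already within the limit are left unchanged.
    """
    longest = max(height, width)
    if longest <= processing_max_dim:
        return height, width
    scale = processing_max_dim / longest
    return round(height * scale), round(width * scale)


processing_shape(200, 400, 100)  # → (50, 100)
processing_shape(400, 200, 100)  # → (100, 50)
processing_shape(50, 50, 100)    # → (50, 50)
```

The default of `float('inf')` makes the first branch always taken, i.e. frames are processed at their original size.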

[2] fps was a rolling mean with a 30-frame window.
[3] Tests were run on a MacBook Pro (2.8 GHz Quad-Core Intel Core i7; Catalina 10.15.2).

AdamSpannbauer commented 4 years ago

@pch9520 & @bryanbosworth,

Would either of you be willing to test the code for your use case? I don't have a good, real use case to test this new feature on, so I wouldn't be able to accurately assess the success of the implementation.

If you are willing to test, the code is in the live_framerate_97 branch and can be installed directly with the pip command below. I bumped the version number on the branch to 1.7.4 so you can verify that you have the right copy of the code to test.

Install:

pip install git+https://github.com/AdamSpannbauer/python_video_stab.git@live_framerate_97

This requires cloning the full repo, which is unfortunately bloated right now, so it is expected to take longer than the average pip install vidstab.

Verify correct version:

import vidstab
assert vidstab.__version__ == '1.7.4'

pch9520 commented 4 years ago

Thanks very much for your help! I'm on vacation and didn't bring a camera from my lab, so I will give it a try in about two weeks!

AdamSpannbauer commented 4 years ago

/remind me to check in on feedback in 1 week

reminders[bot] commented 4 years ago

@AdamSpannbauer set a reminder for Feb 2nd 2020

pch9520 commented 4 years ago

Sorry, because an infectious disease broke out in China, I may not go back to my lab until about Feb 20th 2020.

AdamSpannbauer commented 4 years ago

No problem at all. I hope that you are well. I just set the reminder so I wouldn't forget to check back in on this issue.

I've additionally followed up with another user in #99 who reported the same issue.

pch9520 commented 4 years ago

Thank you anyway!

bryanbosworth commented 4 years ago

Hi Adam, I was able to test version 1.7.4 earlier this week and I found that the processing_max_dim parameter did make a big difference. I could process roughly 4-5x the previous number of pixels at the same frame rate and I could see in the code timing that the image transformations are now the limitation in scaling up to the HD resolution I want to use.

I had some issues initially sending data fast enough to separate processes to improve throughput, but I can fix this and put some better numbers on the max performance I get currently with live video. Let me know if that would be helpful.

AdamSpannbauer commented 4 years ago

Hi @bryanbosworth, thanks for testing!

> I found that the processing_max_dim parameter did make a big difference. I could process roughly 4-5x the previous number of pixels at the same frame rate

Was the output still satisfactory or was there a noticeable tradeoff between quality of the stabilization and speed for your use case?

> I could see in the code timing that the image transformations are now the limitation in scaling up to the HD resolution I want to use

How far off are we now? Are we pretty close or still a long way to go?

AdamSpannbauer commented 4 years ago

Please let me know if there are any issues with this new feature. If I don't receive any feedback, I plan to merge the feature and release it within about a week.

/remind me to check in on feedback in 1 week

reminders[bot] commented 4 years ago

@AdamSpannbauer set a reminder for Feb 21st 2020

pch9520 commented 4 years ago

I have not returned to my lab yet, so I have no feedback to report.

reminders[bot] commented 4 years ago

:wave: @AdamSpannbauer, check in on feedback