
This code is a primitive attempt at video stabilization using OpenCV, without any deep learning. It relies on foundational computer vision techniques such as feature detection, optical flow, transformation estimation, and warping.

Video Stabilization using Optical Flow

This code is a simple attempt to stabilize (smooth) a video using only traditional computer vision techniques in OpenCV. It is not the best approach, but it will help you learn many small concepts in computer vision, such as feature detection, optical flow, transformation estimation, and warping.

Demonstration:

https://github.com/skp-1997/videoStabilizationOpenCV/assets/97504177/7abd54a9-cde6-4c06-968b-88dc28d48825

Steps for Video Stabilization

[1] Detecting features in the frame

Here, I am using 'goodFeaturesToTrack' from OpenCV to detect feature points.

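A minimal sketch of this step, assuming a hypothetical input file `input.mp4` and illustrative parameter values (not necessarily the ones used in this repo):

```python
import cv2

cap = cv2.VideoCapture("input.mp4")  # hypothetical input path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corners: strong, well-spread points that are easy to track
prev_pts = cv2.goodFeaturesToTrack(
    prev_gray,
    maxCorners=200,      # upper bound on returned corners
    qualityLevel=0.01,   # minimum accepted corner quality, relative to the best corner
    minDistance=30,      # minimum spacing between corners, in pixels
    blockSize=3,
)
```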

[2] Calculating Optical Flow

I am using 'calcOpticalFlowPyrLK' from OpenCV to track the feature points detected in the previous frame into the current frame. It uses the pyramidal Lucas-Kanade method to estimate the new pixel positions.

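Continuing the sketch above, the tracking step could look like this (again illustrative, not the repo's exact code):

```python
import cv2

ok, curr = cap.read()
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

# Track the previously detected corners into the current frame
# with pyramidal Lucas-Kanade optical flow.
curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)

# Keep only the points that were successfully tracked in both frames.
idx = status.flatten() == 1
prev_good, curr_good = prev_pts[idx], curr_pts[idx]
```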

[3] Estimate motion between two frames

With the help of the 'estimateRigidTransform' function, I calculated the transformation values [x, y, theta] between consecutive frames.

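A sketch of this step is below. Note that 'estimateRigidTransform' was removed in OpenCV 4.x; the sketch assumes the newer 'estimateAffinePartial2D', which recovers the same kind of partial affine (translation + rotation + scale) matrix.

```python
import numpy as np
import cv2

# 'transforms' collects one [dx, dy, dtheta] entry per consecutive frame pair;
# it is initialized once (transforms = []) before the per-frame loop.
m, inliers = cv2.estimateAffinePartial2D(prev_good, curr_good)

dx = m[0, 2]                       # translation along x
dy = m[1, 2]                       # translation along y
da = np.arctan2(m[1, 0], m[0, 0])  # rotation angle theta

transforms.append([dx, dy, da])
```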

To get an idea of how this smooths the trajectory, here is a visualization.

(Plot: raw vs. smoothed trajectory curve)

[4] Calculate the smooth motion for the entire video

First, I use 'numpy.cumsum' to obtain the trajectory for the entire video, which is then smoothed by filtering. I am using a 'MovingAverageFilter'; a sketch of the logic is shown below.

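A minimal sketch of a moving-average filter and its application to the cumulative trajectory; the function name and window radius here are assumptions, not necessarily the repo's exact values:

```python
import numpy as np

def moving_average_filter(curve, radius=30):
    """Smooth a 1-D curve with a simple box (moving-average) filter.

    radius=30 is an illustrative window radius, not necessarily the repo's value.
    """
    window = 2 * radius + 1
    kernel = np.ones(window) / window
    # Pad the ends so the smoothed curve keeps the same length as the input.
    padded = np.pad(curve, (radius, radius), mode="edge")
    smoothed = np.convolve(padded, kernel, mode="same")
    return smoothed[radius:-radius]

transforms = np.array(transforms)           # shape (N, 3): [dx, dy, dtheta] per frame pair
trajectory = np.cumsum(transforms, axis=0)  # cumulative motion over the whole video

# Smooth each component (x, y, theta) independently.
smoothed_trajectory = np.stack(
    [moving_average_filter(trajectory[:, i]) for i in range(3)], axis=1
)
```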

The filter is applied to the trajectory matrix, smoothing the translation along x and y and the rotation (theta).

[5] Warping using the smoothed transformation matrix calculated above

I use 'cv2.warpAffine' to warp consecutive frames according to the smoothed (filtered) trajectory.
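Continuing the sketch, one way this step can be written (assuming the variables from the previous snippets):

```python
import cv2
import numpy as np

# Correction that turns the raw per-frame motion into smooth motion.
transforms_smooth = transforms + (smoothed_trajectory - trajectory)

cap.set(cv2.CAP_PROP_POS_FRAMES, 0)  # rewind and re-read the video
for i in range(len(transforms_smooth)):
    ok, frame = cap.read()
    if not ok:
        break
    dx, dy, da = transforms_smooth[i]

    # Rebuild a 2x3 affine matrix (rotation + translation) from [dx, dy, dtheta].
    m = np.array([
        [np.cos(da), -np.sin(da), dx],
        [np.sin(da),  np.cos(da), dy],
    ])
    h, w = frame.shape[:2]
    stabilized = cv2.warpAffine(frame, m, (w, h))
```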

[6] Fixing the borders

Since we warp each frame while keeping the original frame size, some dead pixels appear along the borders and would be visible in the output video.
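A common way to hide these border artifacts (as in the learnopencv reference below; the 4% zoom factor here is an assumption) is to scale the frame up slightly about its center:

```python
import cv2

def fix_border(frame, zoom=1.04):
    """Hide dead border pixels by zooming in slightly about the frame center."""
    h, w = frame.shape[:2]
    T = cv2.getRotationMatrix2D((w / 2, h / 2), 0, zoom)  # angle=0, scale=zoom
    return cv2.warpAffine(frame, T, (w, h))

stabilized = fix_border(stabilized)
```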

References:

  1. Video Stabilization Using Point Feature Matching in OpenCV - Abhishek Singh Thakur https://learnopencv.com/video-stabilization-using-point-feature-matching-in-opencv/

  2. Optical Flow in OpenCV (C++/Python) - Maxim Kuklin (Xperience.AI) https://learnopencv.com/optical-flow-in-opencv/

  3. CS231M: Mobile Computer Vision - Stanford University https://web.stanford.edu/class/cs231m/lectures/lecture-7-optical-flow.pdf

Scope of Improvement

The method is primitive and does not work well when objects in the video move quickly. Other approaches would be to find where the optical flow is largest and compensate for it mathematically, or to use a deep learning model.