websoft-ie opened this issue 2 years ago
Hi @websoft-ie, the virtualgimbal_ros package stabilizes a camera feed in real time. https://github.com/yossato/virtualgimbal_ros
Thank you for your support. Is it possible to try it in real time on Windows without ROS?
It is impossible.
In order to stabilize, it requires both the gyro and the optical flow values for every frame. It is possible to calculate the optical flow for every frame in real time. If it is also possible to get the gyro data for that frame, how can I make it work in real time?
Sorry, but if possible, could we discuss this a little on Skype?
@websoft-ie The stabilization requires gyroscope angular velocity data, a video image, and camera calibration data for every frame. Optical flow is not required for it. VirtualGimbal calculates optical flow to synchronize the timing between the video and the angular velocity. These data are captured by different devices, so they have a time offset and drift because the clock sources are not the same. It precisely estimates the time offset between the two clocks using optical flow and angular velocity. Once it gets the timing offset, the stabilization process doesn't need to calculate optical flow for every frame in real time.
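To make the synchronization idea concrete, here is a minimal sketch (not VirtualGimbal's actual code) of estimating the clock offset: it assumes the angular speed derived from optical flow and the angular speed from the gyro have already been resampled to a common rate, and searches for the shift that minimizes their mean squared difference.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical helper, for illustration only. Both series are assumed to be
// resampled to the same rate; the result is the offset between the two clocks.
double estimate_time_offset(const std::vector<double>& flow_speed, // |w| from optical flow
                            const std::vector<double>& gyro_speed, // |w| from gyro
                            double sample_period_sec,
                            int max_shift_samples)
{
    double best_error = std::numeric_limits<double>::max();
    int best_shift = 0;
    for (int shift = -max_shift_samples; shift <= max_shift_samples; ++shift) {
        double error = 0.0;
        int count = 0;
        for (std::size_t i = 0; i < flow_speed.size(); ++i) {
            const long j = static_cast<long>(i) + shift;
            if (j < 0 || j >= static_cast<long>(gyro_speed.size())) continue;
            const double d = flow_speed[i] - gyro_speed[j];
            error += d * d;
            ++count;
        }
        if (count > 0 && error / count < best_error) {
            best_error = error / count;
            best_shift = shift;
        }
    }
    return best_shift * sample_period_sec;
}
```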
If you have the camera feed, the angular velocity data in real time, the camera calibration data, and the clock timing offset value, I think you will be able to stabilize your camera feed in real time. A higher angular velocity sampling rate is preferable since VirtualGimbal stabilizes every frame and every line of the frame to reduce the wobble from a rolling-shutter CMOS. I tested 240 Hz and it is high enough.
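To illustrate why the high sampling rate matters for the per-line correction, here is a hedged sketch (the struct and function names are hypothetical, not this package's API): each row of a rolling-shutter frame is exposed at t_row = t_frame + row * line_delay, and the angular velocity is interpolated at that time.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sample type; the real packages use their own structures.
struct GyroSample { double t, wx, wy, wz; };

// Each row of a rolling-shutter frame is exposed at a slightly different time:
//   t_row = t_frame + row * line_delay
// so the stabilizing rotation is evaluated per row. A high gyro rate (e.g.
// 240 Hz) keeps the interpolation error small. Assumes `gyro` is non-empty,
// sorted by time, and that t lies inside the sampled interval.
GyroSample angular_velocity_at(const std::vector<GyroSample>& gyro, double t)
{
    for (std::size_t i = 1; i < gyro.size(); ++i) {
        if (gyro[i].t >= t) {
            const GyroSample& a = gyro[i - 1];
            const GyroSample& b = gyro[i];
            const double u = (t - a.t) / (b.t - a.t); // linear interpolation factor
            return { t, a.wx + u * (b.wx - a.wx),
                        a.wy + u * (b.wy - a.wy),
                        a.wz + u * (b.wz - a.wz) };
        }
    }
    return gyro.back();
}
```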
Thank you for your support. But I can't find which function to use, even though I can get the angular velocity data for each frame. I want to know how to implement a function like this: function(in_frame, angular_velocity_for_this_frame)
Could you please do this for me?
You're saying that if I can get the gyro data for each frame by setting a higher frequency for the gyro sensor, it doesn't need to calculate the optical flow (estimated angular velocity)?
Is it possible to run this algorithm in real time on a mobile device? It takes 50 ms to run the kernel algorithm for one frame on my PC (Intel Core i7-7700).
Hi, how are you? I would like to ask one thing. I got the desired result by running your git repo; it seems to be perfect. However, it is very slow (6~8 fps) on my workstation (Intel Core i7-7700 desktop PC). So I resized the input video to 960x540 with FFmpeg and used this video as the input for this repo. That gave me fast speed, but since it works on the rescaled video, the quality is not as good as the original video. Is it possible to scale down the resolution, find the adjustment, and then apply the adjustment to the original resolution?
And another question: is the optical flow data definitely required for this? I looked through the code, and many parts of it use the optical flow data. Is it possible to stabilize the video using only IMU data?
I apologize for my lack of explanation. virtualgimbal_ros works in real time, not this VirtualGimbal. This VirtualGimbal package will never work in real time, because it calculates a stable camera pose by filtering the angles before and after the target frame. This requires angular velocity information from after the target frame, so it cannot stabilize in real time.
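For illustration, a minimal sketch of that non-causal filtering (the real filter differs; this only shows the data dependency): the stable angle for frame i averages angles on both sides of i, so frames after i must already exist.

```cpp
#include <cstddef>
#include <vector>

// Illustrative centered moving average over per-frame angles. The taps with
// j > i read angles of FUTURE frames, which is why this cannot run in real time.
std::vector<double> smooth_angles(const std::vector<double>& angle, int half_window)
{
    std::vector<double> smoothed(angle.size());
    for (std::size_t i = 0; i < angle.size(); ++i) {
        double sum = 0.0;
        int count = 0;
        for (int k = -half_window; k <= half_window; ++k) {
            const long j = static_cast<long>(i) + k;
            if (j < 0 || j >= static_cast<long>(angle.size())) continue;
            sum += angle[j]; // j > i means future data is required
            ++count;
        }
        smoothed[i] = sum / count;
    }
    return smoothed;
}
```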
This function is the implementation of real-time stabilization, but it is in virtualgimbal_ros, not in this package.
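For reference, here is a hedged sketch of a causal per-frame stabilizer in the shape asked about above, function(in_frame, angular_velocity_for_this_frame). All names are hypothetical and this is not virtualgimbal_ros's actual API; it assumes OpenCV, integrates the gyro, low-pass filters the accumulated angle causally, and warps the frame by the residual rotation through the homography K * R * K^-1.

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical sketch, NOT virtualgimbal_ros's API. Integrating angular
// velocity as a plain vector sum is a small-angle approximation; a careful
// implementation would compose rotations on SO(3) instead.
class RealtimeStabilizer {
public:
    RealtimeStabilizer(const cv::Matx33d& K, double smoothing)
        : K_(K), smoothing_(smoothing) {}

    cv::Mat stabilize(const cv::Mat& in_frame, const cv::Vec3d& angular_velocity, double dt)
    {
        raw_angle_ += angular_velocity * dt;             // integrate the gyro
        smooth_angle_ = smoothing_ * smooth_angle_
                      + (1.0 - smoothing_) * raw_angle_; // causal low-pass filter
        cv::Matx33d R;
        cv::Rodrigues(smooth_angle_ - raw_angle_, R);    // residual rotation
        const cv::Matx33d H = K_ * R * K_.inv();         // image-plane homography
        cv::Mat out;
        cv::warpPerspective(in_frame, out, cv::Mat(H), in_frame.size());
        return out;
    }

private:
    cv::Matx33d K_;                    // pinhole camera matrix
    double smoothing_;                 // 0..1: higher = smoother but laggier
    cv::Vec3d raw_angle_{0, 0, 0};     // integrated camera angle
    cv::Vec3d smooth_angle_{0, 0, 0};  // filtered (stabilized) camera angle
};
```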
> Is it possible to run this algorithm in real time on a mobile device?

It depends on both the computing resources of the mobile device and the video resolution, and virtualgimbal_ros requires OpenCL.
> Is it possible to scale down the resolution, find the adjustment, and then apply the adjustment to the original resolution?

When you halve the resolution of the video, you need to halve the fx, fy, cx, and cy components of the camera matrix in the camera calibration data. I have not tried it, but I think the value of line_delay needs to be doubled. The idea of finding the adjustment at half resolution and applying it to the original resolution is very interesting. If calculating the adjustment value is what takes a long time, that would speed things up a lot. However, if you have to decode 4K video, transfer the data to the GPU's memory, stabilize it, return it to the CPU's memory, and encode it again, memory bandwidth is most likely the bottleneck.
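To make that scaling rule concrete, here is a small sketch (hypothetical names; the line_delay doubling is untested, as noted above):

```cpp
// Hypothetical calibration struct for illustration.
struct CameraCalibration {
    double fx, fy, cx, cy; // pinhole camera matrix entries [px]
    double line_delay;     // rolling-shutter readout time per row [s]
};

// scale = 0.5 for 1920x1080 -> 960x540: fx, fy, cx, cy shrink with the image,
// while line_delay grows because the same readout time now covers fewer rows.
CameraCalibration scale_calibration(const CameraCalibration& c, double scale)
{
    return { c.fx * scale, c.fy * scale, c.cx * scale, c.cy * scale,
             c.line_delay / scale };
}
```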
> Is the optical flow data definitely required for this? Is it possible to stabilize the video using only IMU data?
VirtualGimbal uses the time difference between the video and the gyro sensor, obtained from optical flow, to stabilize the video, so you are right that the source code appears to use optical flow everywhere. However, as long as the data are time-synchronized, optical flow is essentially unnecessary for video stabilization. This function in the other package, virtualgimbal_ros, does not use optical flow. That package does use optical flow in some parts of its source code to obtain unknown camera parameters, but it is essentially unnecessary.
The basic idea of the video stabilization is described at https://graphics.stanford.edu/papers/stabilization/
Thank you for your detailed answer.

> ... The idea of finding the adjustment at half resolution and applying them to the original resolution is very interesting. ...
So, can you make this module for me, or can you guide me on how to resolve this? Best regards.
Hi, I compiled this and got the expected result with the sample. That looks very good. Now, is it possible to stabilize a camera feed directly, rather than a recorded video? This repo shows only recorded video. Please let me know ASAP, if possible. Thank you in advance.