weidongguo opened this issue 6 years ago
Hi @weidongguo, I'm testing the ROS version and I have a similar problem.
I think the problem is in the camera. I've tried with a remote PiNoir camera (processing the images on my laptop with SVO) and with a Logitech C920, and neither of them has a global shutter. I want to try a fisheye camera to get a wider field of view. What are your results with the GoPro 4?
Hi @weidongguo @odinhr
I have the exact same problem. Did you manage to solve this issue?
Hi @weidongguo @odinhr, I am facing the same problem. It does not recognize key features properly from my camera, even though the environment I am testing in is quite feature-rich. Any ideas on how to improve it?
Any solution? I have the same issue.
Our team has tested the code (ROS version) with a live camera. We couldn't reproduce the performance shown in the demo video (https://www.youtube.com/watch?v=2YnIMfw6bJY).
Here is our result: https://drive.google.com/file/d/19Qy4BdTpAHXQZo3rexffL6604cIkSkC_/view?usp=sharing
We notice that if we move the camera fast, most of the feature points are lost, and the pose estimation is therefore no longer accurate. To recover, we can hold the camera still and wait until it picks up enough feature points again. Once it is ready, we can move the camera and the pose estimation is accurate, but as soon as we have moved far enough that there are no longer enough feature points (essentially, the current frame differs too much from the keyframe), the pose estimation becomes inaccurate again.
On the other hand, if we simply replay the bag file provided with the source code, the pose estimation stays accurate even when the camera moves fast.
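To check whether fast motion by itself explains the difference, something like the rough OpenCV sketch below could be run on a recording of the live feed: it counts FAST corners per frame and measures how far the tracked corners move between consecutive frames. The video path is a placeholder, and FAST plus pyramidal LK flow are only stand-ins for diagnosis, not SVO's actual front end.

```python
# Rough diagnostic sketch (not part of SVO): per-frame FAST corner count and
# median inter-frame pixel displacement from pyramidal LK optical flow.
import cv2
import numpy as np

cap = cv2.VideoCapture("live_camera_recording.avi")  # placeholder input
fast = cv2.FastFeatureDetector_create(threshold=20)

prev_gray, prev_pts = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    keypoints = fast.detect(gray, None)
    print("FAST corners:", len(keypoints))

    if prev_gray is not None and prev_pts is not None and len(prev_pts):
        # Track the previous frame's corners into the current frame and
        # measure how far they moved.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        good = status.ravel() == 1
        if good.any():
            disp = np.linalg.norm((nxt - prev_pts)[good].reshape(-1, 2), axis=1)
            print("median inter-frame displacement [px]:", np.median(disp))

    prev_gray = gray
    prev_pts = (np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
                if keypoints else None)

cap.release()
```

If the corner count stays high but the median displacement jumps during fast motion, the live feed is probably moving more per frame than the image alignment can handle at that frame rate.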
We have tried two cameras (both pinhole, calibrated): a GoPro 4 Hero Silver at 1080p/90 fps and 720p/30 fps settings, and a webcam at 480p (this is the one used in our video).
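Since calibration quality matters a lot for a live setup, here is a minimal sketch (chessboard geometry and image paths are placeholders, not our actual setup) of how the pinhole intrinsics could be double-checked with OpenCV; a high RMS reprojection error would be one more thing to rule out before blaming the motion.

```python
# Minimal pinhole calibration check with a chessboard; board size, square
# size and image paths are placeholders.
import glob
import cv2
import numpy as np

cols, rows, square = 9, 6, 0.025  # inner corners and square size in metres
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)
print("RMS reprojection error:", rms)  # ideally well below ~0.5 px
print("fx, fy, cx, cy:", K[0, 0], K[1, 1], K[0, 2], K[1, 2])
print("distortion:", dist.ravel())
```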
We are looking for hints that would help us reproduce, with a live camera, the performance we get when replaying the ROS bag file. Any help is appreciated.
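One thing worth checking is whether the live camera delivers frames to the SVO node at the same steady rate as the replayed bag. A minimal rospy sketch for measuring the received frame rate and timestamp jitter is below; the topic name /camera/image_raw is a placeholder for whatever topic the SVO node actually subscribes to.

```python
#!/usr/bin/env python
# Rough sketch: log the interval between consecutive image messages to compare
# the live camera against the replayed bag. Topic name is a placeholder.
import rospy
from sensor_msgs.msg import Image

prev_stamp = None

def callback(msg):
    global prev_stamp
    stamp = msg.header.stamp
    if prev_stamp is not None:
        dt = (stamp - prev_stamp).to_sec()
        if dt > 0:
            rospy.loginfo("dt = %.4f s (%.1f Hz)", dt, 1.0 / dt)
    prev_stamp = stamp

rospy.init_node("camera_rate_check")
rospy.Subscriber("/camera/image_raw", Image, callback, queue_size=10)
rospy.spin()
```

Large or irregular gaps in the live stream (dropped frames, USB bandwidth limits, driver buffering) would make the inter-frame motion much larger than in the bag even at the same physical camera speed.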