Hi,
Well, unfortunately, it is currently not possible with the ZED SDK itself, since the rectification also uses CUDA.
However, thanks for the feedback; we might look into this in the future. In the meantime, you could create your own wrapper using OpenCV to get rectified RGB images.
Oh, this is a bummer :(
I guess the current standard for stereo cameras in mobile robotics research is the PointGrey Bumblebee, but it uses FireWire and the resolution is suboptimal. You guys have a killer camera for this; the only thing missing is actually providing people like us with data to work with. Then you would probably rule the scientific platform.
Anyway, I will keep following your blog for updates on this. I hope getting the rectified images will be possible in the foreseeable future. Alternatively, it would be enough to have the calibration data (or even distort/undistort tables) for the cameras along with the unrectified images. I am happy to write my own rectification tool.
You can actually already get the calibration data. It is stored in a file at /usr/local/zed/settings/SN{Your serial number}.conf on Linux, or in C:\Users\YOUR_USER_NAME\AppData\Roaming\Stereolabs\settings\ on Windows.
Oh, awesome, I missed it somehow. Now I would just need the unrectified image stream. :)
The ZED is UVC compliant so it's basically a standard webcam. You can simply use OpenCV VideoCapture.
Like this: https://gist.github.com/adujardin/242276a4796ef425a33af70a41a02e00
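For anyone who just wants the idea without opening the gist, here is a minimal sketch of the same approach: open the camera with OpenCV's `VideoCapture` and split the side-by-side frame into left and right halves. The device index and the 2560x720 side-by-side resolution are assumptions you may need to adjust for your setup.

```cpp
// Minimal sketch: grab the ZED as a plain UVC webcam with OpenCV and split
// the side-by-side frame into left/right images.
// Assumptions: device index 0 and a 2560x720 side-by-side mode (HD720).
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);   // the ZED enumerates as a standard UVC camera
    if (!cap.isOpened()) return 1;

    // Request a side-by-side resolution (left and right concatenated horizontally).
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 2560);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

    cv::Mat frame;
    while (cap.read(frame))
    {
        // Left image is the left half of the frame, right image is the right half.
        cv::Mat left  = frame(cv::Rect(0, 0, frame.cols / 2, frame.rows));
        cv::Mat right = frame(cv::Rect(frame.cols / 2, 0, frame.cols / 2, frame.rows));

        cv::imshow("left", left);
        cv::imshow("right", right);
        if (cv::waitKey(1) == 27) break;   // Esc to quit
    }
    return 0;
}
```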
Hi, I made a simple ZED CPU ROS wrapper in case anyone wants to use it: https://github.com/transcendrobotics/zed_cpu_ros
Is there documentation somewhere on how to configure the ZED through UVC? I.e., can I set the frame rate/resolution/etc. through it?
Do you have any tips on how to rectify the images using the camera calibration params and OpenCV?
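Not official guidance, but the usual OpenCV flow would be roughly the following: feed the per-camera intrinsics and distortion plus the relative rotation R and translation T into `cv::stereoRectify`, build lookup maps with `cv::initUndistortRectifyMap`, and warp with `cv::remap`. The variable names below are placeholders for values read from the .conf file, not anything from the ZED SDK.

```cpp
// Sketch of stereo rectification with OpenCV, assuming the intrinsics (K_l, K_r),
// distortion coefficients (D_l, D_r) and the relative R, T are already loaded.
#include <opencv2/opencv.hpp>

void rectifyPair(const cv::Mat& left, const cv::Mat& right,
                 const cv::Mat& K_l, const cv::Mat& D_l,
                 const cv::Mat& K_r, const cv::Mat& D_r,
                 const cv::Mat& R,   const cv::Mat& T,
                 cv::Mat& left_rect, cv::Mat& right_rect)
{
    cv::Size size = left.size();
    cv::Mat R1, R2, P1, P2, Q;

    // Compute the rectification transforms and projection matrices for both cameras.
    cv::stereoRectify(K_l, D_l, K_r, D_r, size, R, T, R1, R2, P1, P2, Q,
                      cv::CALIB_ZERO_DISPARITY, 0);

    // Build the undistort/rectify lookup maps and remap both images.
    cv::Mat map1l, map2l, map1r, map2r;
    cv::initUndistortRectifyMap(K_l, D_l, R1, P1, size, CV_32FC1, map1l, map2l);
    cv::initUndistortRectifyMap(K_r, D_r, R2, P2, size, CV_32FC1, map1r, map2r);
    cv::remap(left,  left_rect,  map1l, map2l, cv::INTER_LINEAR);
    cv::remap(right, right_rect, map1r, map2r, cv::INTER_LINEAR);
}
```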
Hi, I'm dealing with the same problem (capturing a stereo video stream on a robot without a GPU). I've been able to obtain the stereo video stream through the v4l2 library and now I'm writing the rectification part.
However, I'm not sure about the meaning of several calibration parameters. Specifically, the convergence, rx (tilt) and rz (roll) parameters, which can be found in the settings dialog in ZED Explorer.
It seems these are related to the relative rotation between the two cameras. But how can I get the relative rotation matrix from these parameters? I want a 3x3 relative rotation matrix R and a 3x1 translation vector T so that a 3D point in the right camera coordinates can be computed as X_right = R * X_left + T. (AFAIK, this is also the convention used in OpenCV.)
Does anyone have a clue? Thank you very much!
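Not an authoritative answer, but one common reading of those fields is that convergence, rx (tilt) and rz (roll) are Euler angles (in radians) about the y, x and z axes of the left camera, with the translation being essentially the baseline along x. Here is a sketch of composing them into the R and T you describe, purely under that assumption; `rx`, `cv_angle`, `rz` and `baseline` are placeholders for the values in the .conf file.

```cpp
// Sketch under the ASSUMPTION that convergence, rx (tilt) and rz (roll) are
// Euler angles (radians) about y, x and z, and that the translation between
// the cameras is the baseline along x. Values below are made-up examples.
#include <iostream>
#include <opencv2/opencv.hpp>

cv::Mat rotationFromAngles(double rx, double cv_angle, double rz)
{
    // cv::Rodrigues turns an axis-angle vector into a 3x3 rotation matrix.
    cv::Mat Rx, Ry, Rz;
    cv::Rodrigues(cv::Vec3d(rx, 0, 0),       Rx);
    cv::Rodrigues(cv::Vec3d(0, cv_angle, 0), Ry);
    cv::Rodrigues(cv::Vec3d(0, 0, rz),       Rz);
    return Rz * Ry * Rx;   // one possible composition order; verify against your data
}

int main()
{
    double rx = 0.001, cv_angle = 0.002, rz = 0.0005;  // example angles (radians)
    double baseline = 0.12;                            // example baseline (metres)

    cv::Mat R = rotationFromAngles(rx, cv_angle, rz);
    cv::Mat T = (cv::Mat_<double>(3, 1) << -baseline, 0.0, 0.0);

    // R and T can then be fed to cv::stereoRectify as in the sketch above.
    std::cout << "R = " << R << "\nT = " << T << std::endl;
    return 0;
}
```

If the assumption is right, the rectified epipolar lines should come out horizontal; if they don't, the composition order or the signs of the angles/baseline probably need flipping.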
willdzeng, many thanks for zed_cpu_ros!
Thank you @willdzeng for your effort in publishing the zed_cpu_ros wrapper. Will it be possible to use this wrapper on a Raspberry Pi 3 or an Odroid XU4 for obtaining stereo images?
Since the issue seems to be solved, I'm closing it; you can continue to comment on https://github.com/transcendrobotics/zed_cpu_ros
@willdzeng Thumbs up! What great work :)
Just a quick note for those who, like me, only need to take still pictures with the ZED: it works out of the box with the Windows 10 Camera app. I think using Cheese on Ubuntu could work too, but I have not tested it yet.
@french-paragon the ZED cameras are all UVC compatible, so you can use any kind of UVC software (Cheese, guvcview, etc.) to acquire frames as if it were a simple webcam. Furthermore, there is an open-source driver that you can use to control the cameras and retrieve sensor data if you are using a ZED 2 or ZED Mini: https://github.com/stereolabs/zed-open-capture
Hello! Thanks for creating such an awesome camera!
Currently, the wrapper only builds with the ZED SDK, which only installs itself if CUDA is present. Is there any way to publish stereo images (no depth) without the need for a GPU?
The motivation behind this is that a GPU is a huge energy monster; we would like to use the camera on a drone, and we have our own algorithm for depth estimation, so essentially we just need two streams of rectified images.