stereolabs / zed-opencv

ZED SDK interface sample for OpenCV
https://www.stereolabs.com/docs/opencv/
MIT License

Calculating Depth Map not using ZED SDK #6

Closed wkyoun closed 2 years ago

wkyoun commented 8 years ago

I would like to calculate the disparity map and depth map without using the ZED SDK. (I have some experience with stereo calibration and rectification.)

I would also like to compare the depth map from my own code against the one produced by the ZED SDK:

zed->normalizeMeasure(sl::zed::MEASURE::DEPTH)

In order to do that,

I need the intrinsic and extrinsic parameters of the ZED's left and right cameras.

My questions are as follows:

1) How can I develop code to calculate the disparity map and depth map without using the ZED SDK?

First of all, I will definitely need the intrinsic and extrinsic parameters of the ZED's left and right cameras.

2) Would any of you be open to making the relevant part of the ZED SDK public (e.g., zed->normalizeMeasure(sl::zed::MEASURE::DEPTH))?

jonra1993 commented 6 years ago

Maybe you could try the Point Cloud Library. It has some algorithms for creating depth maps.

WASCHMASCHINE commented 6 years ago

This issue is very old. OpenCV has its own stereo algorithms. Use them to compute a disparity map, which can be converted to depth via the focal length and baseline. There are also LIBELAS and several other GPU stereo implementations, even on GitHub.

santiagomalter commented 6 years ago

Hello! I'm also trying to build a depth map without the SDK. @WASCHMASCHINE could you give more information about how you proceeded, or possibly some code samples? I did some testing with the OpenCV examples but got poor results.

WASCHMASCHINE commented 6 years ago

@santiagomalter The easiest way would be to use the ZED SDK to get rectified left and right images on the CPU as OpenCV Mats. Then you can proceed as already described. The focal length and baseline are in the calibration file and depend on the ZED's resolution mode. There are even some tutorials for stereo matching with OpenCV, like http://docs.opencv.org/trunk/d3/d14/tutorial_ximgproc_disparity_filtering.html
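For reference, the ZED calibration file is an INI-style text file, so pulling the focal length and baseline out of it can be sketched with Python's standard configparser. The section and key names here ([LEFT_CAM_HD], fx, Baseline under [STEREO]) follow the format of the conf files Stereolabs ships, but verify them against your own file; they can differ per resolution mode and SDK version.

```python
import configparser

def read_zed_calibration(text, resolution="HD"):
    """Parse the left focal length (px) and baseline (mm) from a
    ZED-style calibration file given as a string.

    Section/key names are assumptions based on the shipped conf format;
    check them against your own calibration file.
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    fx = cfg.getfloat(f"LEFT_CAM_{resolution}", "fx")
    baseline_mm = cfg.getfloat("STEREO", "Baseline")
    return fx, baseline_mm
```

In practice you would read the downloaded conf file from disk and pass its contents in.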

santiagomalter commented 6 years ago

I don't have a CUDA-compatible GPU, so I can't run the SDK; that's why I'm asking. Thanks!

WASCHMASCHINE commented 6 years ago

Oh, I see. You could try to get some images with OpenCV's webcam interface, if you're lucky. I read somewhere in the old documentation (what happened there, Stereolabs?) that the camera is streamed in YUV 4:2:2 to the GPU, so you might need to account for that. Also, the left and right images arrive side by side in one frame.

santiagomalter commented 6 years ago

I've easily managed to get the left/right images and then apply basic stereo algorithms from OpenCV in Python. But unfortunately my results are pretty bad at this point...

I understand that this is part of what the SDK does. But I cannot use it at this time, and I believe that what I'm trying to achieve is pretty basic compared to what the ZED is capable of. (I don't need positional tracking, for instance.)

Does anyone have an example of a real-time depth map with Python/OpenCV through UVC, without the SDK and its requirements? Possibly using the device calibration data for increased accuracy?

Moreover, an official example in native Python/OpenCV would open up many opportunities on platforms that are incompatible with the SDK.

WASCHMASCHINE commented 6 years ago

Have you undistorted and rectified your images? If you are getting images, you will just need to brush up on your OpenCV. They even provide stereo calibration code, so you can disregard the ZED calibration if you print a checkerboard onto a flat surface.

I don't think Stereolabs will provide a CPU implementation of their algorithm, since they advertise high frame rates.

github-actions[bot] commented 2 years ago

This issue is stale because it has been open 30 days with no activity. Remove the stale label or comment; otherwise it will be automatically closed in 5 days.