Closed fire17 closed 2 years ago
Hi, you can look into this sample to get rectified images and the calibration from the ZED (without the SDK): https://github.com/stereolabs/zed-opencv-native/tree/master/python
@fire17 did you ever work out your problem?
I am also trying to triangulate a 3D (x, y, z) point from two sets of 2D points, but my 3D values do not seem to make sense:
# note: cv2.triangulatePoints expects 3x4 projection matrices, not 3x3 intrinsic (camera) matrices
point_4d_hom = cv2.triangulatePoints(camera_matrix_left, camera_matrix_right, left_point, right_point)
good_pts_mask = np.where(point_4d_hom[3] != 0)[0]
point_4d = point_4d_hom / point_4d_hom[3]
https://gist.github.com/stephanschulz/158fb66c8f7516e0f95bc10a846bdb3f#file-zed3-py-L202
Here is a short video showing how I select the point pairs with the mouse. The x, y, z printout shows that z is around -65 most of the time.
@stephanschulz Hi brother, well, it was a very long time ago, so I'm sharing from some shallow memories. To find the depth of a pixel we used template matching from OpenCV. Depth is proportional to the inverse of the disparity (for a rectified pair, depth = fx * baseline / disparity); I don't remember if we used it directly or adjusted it, and either way there are distortions that depend on the camera. To improve on that, you do the template matching on rectified images, rectL and rectR (I think also with OpenCV). To get the rectified images you need to pass in matrices that you get from calibration, which you do with a checkerboard or something similar. Try to get a video of the board panning around top-to-bottom and left-to-right until you get full coverage of the camera view. Do that from close, medium, and medium-plus range, take around 80 snapshots from it (especially around the borders of the view), and get the matrices from the calibration.
After all of that you should be able to run the template matching with better results. We also made a "smarter" template matcher that tries a few times with different ROI sizes and selects the result closest to the average (template matching can trip up because the background looks different from the two angles). As you zoom in or hone in on your match, try to bring the ROI smaller and smaller: eventually it gets too small to work, while too large an ROI gives an inconsistent disparity. The farther out you go, the more that inconsistency will affect the final z. That's why, in my opinion, the template matching is the most crucial part.
After you get all the points, measure them in real space, project what you sampled, and see the differences. Even if the result initially looks distorted, that's OK; it's more about the consistency of your matching algorithm. If you're sampling consistently, you can always run matrix transformations (scale, stretch, rotate, pan, etc.) until it lines up best with the real model. After the final matrix transforms, we were happy with our results.
Something that really helped us was to capture the points of a cube: I took pics of myself standing at the 4 corners of a (big) square in the room, using my head and feet as points, which gives you a cube. Then 4 more times at the midpoints, and once more in the dead center. If you gather all these points, you should see clear straight lines from corner to corner with the midpoint in between. When you see it, it will probably be a distorted cube, but if you can undistort it, you've just calibrated your entire 3D scene :)
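The final "line it up with the real model" step can be done as a least-squares affine fit between the sampled points and their measured real-world positions. A sketch (helper names are mine, not from the thread):

```python
import numpy as np

def fit_affine(sampled, measured):
    """Least-squares 3x4 affine A such that measured ~= sampled @ A[:, :3].T + A[:, 3]."""
    n = sampled.shape[0]
    hom = np.hstack([sampled, np.ones((n, 1))])  # N x 4 homogeneous points
    A_t, *_ = np.linalg.lstsq(hom, measured, rcond=None)
    return A_t.T  # 3 x 4

def apply_affine(A, pts):
    """Map N x 3 points through the fitted transform."""
    return pts @ A[:, :3].T + A[:, 3]
```

An affine fit covers scale, stretch, rotation, and pan in one solve; given the corner/midpoint/center samples of the calibration cube and their measured positions, fit_affine recovers the transform that best undistorts the whole scene.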
As a final note, what we did was for 3D pose estimation. Later we found a model that gives you 3D keypoints from a single 2D camera view, which was amazing, and we only had to find the final matrix transformations to make it really good. Later on we ditched the cameras completely and did our magic from the smartwatch.
Good luck, and I hope this helped in any way <3 Have a good one!
Thank you for the advice, I will see how I can implement some of it.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment otherwise it will be automatically closed in 5 days
Hello everyone! I just bought a ZED and it looks very good. Can anyone give me the correct calibration matrices for the ZED? I couldn't find them in a valid structure (3x4 projection matrices).
I wish to triangulate a single 3D point from matching 2D points (matched manually) in the ZED's left and right RGB images, like this (I have the left and right coordinates); I just need the correct P1 and P2 for the ZED.
This is working great for me with another sensor; the issue is the projection matrices P1 and P2. I couldn't find the correct P1 and P2 for the ZED.
I got my previous P1 and P2 from calibration. My previously working P1 and P2:
P1:
[[2.70446240e+03 0.00000000e+00 3.61979347e+02 0.00000000e+00]
 [0.00000000e+00 2.70446240e+03 7.55760490e+02 0.00000000e+00]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00]]
P2:
[[2.70446240e+03 0.00000000e+00 3.78336212e+02 -1.93749678e+04]
 [0.00000000e+00 2.70446240e+03 7.55760490e+02 0.00000000e+00]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00]]
If I use these on images that come from the ZED, it works, but the 3D space is distorted. Please help me find the right calibration... Thank you very much!! Tami
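If it helps: for the rectified ZED images, P1 and P2 can be assembled directly from the per-camera intrinsics and the baseline in the ZED's factory calibration file. A sketch with placeholder numbers (these are NOT real ZED values; substitute the values from your own unit's calibration file):

```python
import numpy as np

# Placeholder intrinsics: read the real fx, fy, cx, cy for your resolution
# from the ZED calibration file, and the baseline from its stereo section.
fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0
baseline_mm = 120.0

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Rectified pair: left camera at the origin, right camera shifted along x by
# the baseline, so P1 = K [I | 0] and P2 = K [I | (-B, 0, 0)^T].
P1 = np.hstack([K, np.zeros((3, 1))])
P2 = np.hstack([K, np.array([[-fx * baseline_mm], [0.0], [0.0]])])

# cv2.triangulatePoints(P1, P2, ...) then yields points in millimeters,
# expressed in the left camera's frame.
```

A quick sanity check for matrices like these: project a point at known depth Z through both and verify the disparity equals fx * baseline / Z.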