andrea-pilzer opened this issue 5 years ago
We have updated the extrinsic parameters for stereo, shown below:

R = np.array([[ 0.996797,   0.0384542, -0.0701246 ],
              [-0.038475,   0.999259,   0.00105552],
              [ 0.0701132,  0.0016459,  0.997538  ]])
T = np.array([-0.636491, -0.0158574, -0.0599776])
We have also tested the same picture with the new parameters; the results are shown below:
Detailed results:
One comment: to get even better alignment results, you can downsample twice, since our original image resolution is very high.
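For anyone plugging these numbers in, here is a minimal sketch of how the posted R and T could be fed to OpenCV's stereo rectification. The intrinsic matrices K5/K6, the distortion vectors, and the image paths below are placeholders only, not values from the dataset; substitute the Camera 5 / Camera 6 calibration that ships with ApolloScape. This is not the data_test.py pipeline itself, just one possible way to use the extrinsics:

```python
import cv2
import numpy as np

# Extrinsics posted above (rotation and translation from Camera 5 to Camera 6).
R = np.array([[ 0.996797,   0.0384542, -0.0701246 ],
              [-0.038475,   0.999259,   0.00105552],
              [ 0.0701132,  0.0016459,  0.997538  ]])
T = np.array([-0.636491, -0.0158574, -0.0599776])

# Placeholders: replace with the real Camera 5 / Camera 6 intrinsics and distortion.
K5 = np.eye(3)
K6 = np.eye(3)
d5 = np.zeros(5)
d6 = np.zeros(5)

# Placeholder paths: point these at an actual Camera 5 / Camera 6 pair.
img5 = cv2.imread("Camera_5.jpg")
img6 = cv2.imread("Camera_6.jpg")
h, w = img5.shape[:2]

# Rectification transforms and projection matrices for both cameras.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K5, d5, K6, d6, (w, h), R, T, alpha=0)

# Remap tables, then warp each image into the common rectified frame.
map5x, map5y = cv2.initUndistortRectifyMap(K5, d5, R1, P1, (w, h), cv2.CV_32FC1)
map6x, map6y = cv2.initUndistortRectifyMap(K6, d6, R2, P2, (w, h), cv2.CV_32FC1)
rect5 = cv2.remap(img5, map5x, map5y, cv2.INTER_LINEAR)
rect6 = cv2.remap(img6, map6x, map6y, cv2.INTER_LINEAR)

# As suggested above, downsample twice (factor 4 overall) since the originals are large.
small5 = cv2.pyrDown(cv2.pyrDown(rect5))
small6 = cv2.pyrDown(cv2.pyrDown(rect6))
```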
Hi,
thanks a lot for your answer!
Now the sequences road02_ins and road03_ins are correctly rectified and I can use them with my model. Unfortunately, I found that for sequence road01_ins these parameters are not correct.
I am working on stereo-based depth estimation. I am confused about how to preprocess the data and rectify the images: with my current pipeline, the disparity does not match the corresponding Camera 5 and Camera 6 image pairs. Could you please give me some detailed suggestions on how to use this dataset?
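As a rough consistency check (my own suggestion, not an official tool), you can warp one rectified view onto the other using the disparity and compare: if rectification and disparity agree, the warped image should line up with the reference view except at occlusions. The file names, the .npy format, and the assumption that the disparity is a float array in pixels defined in the Camera 5 frame are all placeholders; flip the sign of the horizontal shift if the convention is reversed:

```python
import cv2
import numpy as np

# Placeholder paths: an already-rectified Camera 5 / Camera 6 pair and a disparity map.
rect5 = cv2.imread("rect_Camera_5.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
rect6 = cv2.imread("rect_Camera_6.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
disp  = np.load("disparity_Camera_5.npy").astype(np.float32)

h, w = disp.shape
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))

# Sample Camera 6 at (x - d, y); a consistent pair should align except at occlusions.
warped6 = cv2.remap(rect6, xs - disp, ys, cv2.INTER_LINEAR)

valid = disp > 0
err = np.abs(warped6 - rect5)[valid]
print("mean photometric error on valid pixels:", err.mean())
```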
@HobertXu Hi, have you solved the problem? Do the stereo images need to be rectified? I found they are only very slightly distorted...
Hi, I am using the data_test.py script you provided in utils to rectify the images of the data split for scene parsing, because I want to train an unsupervised depth estimation model. I found a misalignment of a few pixels when putting the two images side by side (see the images at the bottom). I have a few questions:
Thank you.
The images I used (randomly picked from the dataset) are
road02_ins/ColorImage/Record001/Camera 5/170927_063819921_Camera_5.jpg
and road02_ins/ColorImage/Record001/Camera 6/170927_063819921_Camera_6.jpg
This is a zoom of the area where I found the problem:
These are the two rectified images:
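To put a number on the misalignment, here is a rough sketch (mine, not from the repo) that matches ORB features between the two rectified images and reports the vertical offset of the matches; for a correctly rectified pair the offsets should be close to zero. The rectified-image file names are placeholders:

```python
import cv2
import numpy as np

# Placeholder names: the two rectified images produced by data_test.py.
img5 = cv2.imread("170927_063819921_Camera_5_rect.jpg", cv2.IMREAD_GRAYSCALE)
img6 = cv2.imread("170927_063819921_Camera_6_rect.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two views.
orb = cv2.ORB_create(2000)
kp5, des5 = orb.detectAndCompute(img5, None)
kp6, des6 = orb.detectAndCompute(img6, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des5, des6)

# Vertical offset of each match; after rectification this should be ~0 pixels.
dy = np.array([kp5[m.queryIdx].pt[1] - kp6[m.trainIdx].pt[1] for m in matches])
dy = dy[np.abs(dy) < 20]   # drop gross outliers from bad matches
print("median vertical offset: %.2f px, std: %.2f px" % (np.median(dy), dy.std()))
```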