Closed jiaxinxie97 closed 5 years ago

Hi Chao, I ran into a problem while doing indoor cross-dataset evaluation. The RGB images and depth maps I downloaded from the dataset's website are not registered to each other, and I haven't found any parameters that would let me calibrate them. When I visualize one pair, the offset between them does not look negligible. How did you deal with this problem?

We just ignored it and used the depth maps as they are when running all the compared methods. As you said, there is a known small horizontal offset between the RGB and depth images; that is why, if you look closely at the error maps for depth estimation, the errors near depth boundaries are higher on this dataset. A more rigorous approach would be to map the TSDF volumes into the cameras (given the camera poses and intrinsics) and render the depth maps from them.

Thank you! Since I can't find any files that record the intrinsic and extrinsic parameters, mapping the TSDF volumes to the cameras seems infeasible. Ignoring the offset looks like the best option for this dataset; maybe I can use another calibrated indoor dataset instead.
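For readers who do have camera poses and intrinsics, the suggestion above (mapping the TSDF volume into each camera to get a registered depth map) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: it assumes the TSDF surface has already been extracted as a point cloud (e.g. via marching cubes), and `render_depth`, `K` (3x3 pinhole intrinsics), and `T_world_to_cam` (4x4 world-to-camera extrinsic) are hypothetical names introduced here.

```python
import numpy as np

def render_depth(points_world, K, T_world_to_cam, height, width):
    """Z-buffer projection of surface points (assumed to be extracted from
    the TSDF, e.g. with marching cubes) into a pinhole camera, producing a
    depth map that is registered to that camera by construction."""
    # Transform the points from world coordinates into the camera frame.
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    pts_cam = (T_world_to_cam @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Perspective projection with the intrinsics, then round to pixels.
    uv = (K @ pts_cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)

    # Z-buffer: for each pixel, keep the nearest surface point.
    depth = np.full((height, width), np.inf)
    for ui, vi, zi in zip(u[valid], v[valid], pts_cam[valid, 2]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
    depth[np.isinf(depth)] = 0.0  # 0 marks pixels with no observation
    return depth
```

With a dense enough point cloud this gives per-pixel depth aligned to the RGB camera; without the intrinsics and extrinsics, as noted in the thread, this route is not available.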