YuhangMing / Semantic-Indoor-Place-Recognition


Didn't find keyframe's RGB image nor Depth image #3

Closed jingGM closed 1 year ago

jingGM commented 1 year ago

Hi Yuhang, I'm trying your dataset in my project, but I couldn't find the keyframes' original RGB images or depth images. It might not be ideal to use the reconstructed data for place recognition. Do you have comparison results using the original data, or aligned RGB-D images, instead of the reconstructed point clouds?

YuhangMing commented 1 year ago

Hi,

The original RGB and depth images used in the experiments are taken directly from the ScanNet dataset. Because they are exactly the same files, I may not be able to redistribute them; you can find them in the original ScanNet dataset. All the point cloud files are named "{scene-id}_{frame-id}_sub.ply", so it should be straightforward to look up the corresponding RGB and depth images in the original ScanNet dataset.
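As a minimal sketch of that lookup, the snippet below parses a "{scene-id}_{frame-id}_sub.ply" filename and builds candidate paths to the matching RGB and depth frames. The directory layout (`color/<frame>.jpg`, `depth/<frame>.png`) assumes frames were exported with ScanNet's official `.sens` reader; the exact filenames here are illustrative, not taken from this repository.

```python
import re
from pathlib import Path

def parse_keyframe_name(ply_name: str) -> tuple[str, int]:
    """Recover (scene_id, frame_id) from a name like 'scene0000_00_001200_sub.ply'.

    Assumes the ScanNet scene-id pattern 'sceneXXXX_XX' and a numeric frame id.
    """
    m = re.fullmatch(r"(scene\d{4}_\d{2})_(\d+)_sub\.ply", ply_name)
    if m is None:
        raise ValueError(f"unexpected filename: {ply_name}")
    return m.group(1), int(m.group(2))

def scannet_frame_paths(scannet_root: str, ply_name: str) -> tuple[Path, Path]:
    """Build assumed paths to the exported RGB and depth images for one keyframe."""
    scene_id, frame_id = parse_keyframe_name(ply_name)
    scene_dir = Path(scannet_root) / scene_id
    # Layout assumed from ScanNet's SensReader export: color/ holds JPEGs, depth/ holds 16-bit PNGs.
    return (scene_dir / "color" / f"{frame_id}.jpg",
            scene_dir / "depth" / f"{frame_id}.png")

rgb, depth = scannet_frame_paths("/data/scannet", "scene0000_00_001200_sub.ply")
```

Adjust the regular expression and directory layout to match however your local ScanNet copy was extracted.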

As discussed in the paper, the only reason we used reconstructed data is that we wanted to "minimise the effect of the noisy depth measurements and the incomplete reconstruction of single views". Comparison examples of directly back-projected point clouds versus the reconstructed data can be found in Figure 5.5 of my dissertation.

Hope this helps, Yuhang

YuhangMing commented 1 year ago

Closing the issue as there are no further discussions.