Closed haiphamcse closed 6 months ago
We appreciate your highlighting the need for clarity regarding depth generation in our documentation. We are in the process of revising it to address this. In the meantime, you may proceed by following the instructions provided for SemanticKITTI, as the process for depth generation is analogous.
Thank you for the quick reply, can you provide the code to export (or the preprocessed pseudo point clouds) of MobileStereoNet on KITTI-360?
Regrettably, we are currently unable to share the code or preprocessed results. The raw stereo image data and associated code environment have been cleaned up, and the preprocessed depth data exceeds 80 GB, making it impractical to upload.
However, we can guide you through implementing KITTI-360 preprocessing with the existing MobileStereoNet codebase from VoxFormer. You can achieve this with only the following two key modifications:
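As context for the preprocessing above, the core step in generating pseudo point clouds from a stereo network is converting predicted disparity to metric depth via `depth = focal * baseline / disparity`. Below is a minimal sketch of that conversion; the calibration constants (`FOCAL_PX`, `BASELINE_M`) are placeholder assumptions and must be replaced with the actual values from the KITTI-360 perspective-camera calibration files, and the function name is illustrative, not part of the VoxFormer codebase.

```python
import numpy as np

# Placeholder calibration values -- substitute the real KITTI-360
# perspective-camera focal length and stereo baseline from its
# calibration files before use.
FOCAL_PX = 552.554   # focal length in pixels (assumed placeholder)
BASELINE_M = 0.6     # stereo baseline in meters (assumed placeholder)

def disparity_to_depth(disparity, focal_px=FOCAL_PX, baseline_m=BASELINE_M):
    """Convert a predicted disparity map (pixels) to metric depth (meters).

    Invalid matches (disparity <= 0) are mapped to depth 0.
    """
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a 2x2 disparity map with one invalid pixel.
disp = np.array([[10.0, 20.0],
                 [0.0,  5.0]], dtype=np.float32)
depth_map = disparity_to_depth(disp)
```

The same formula applies to SemanticKITTI; only the calibration constants and image paths change, which is why the two pipelines are analogous.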
Hi there, loved your work. I want to ask whether your results on KITTI-360 use the voxel proposal layer (I do see in the log that they do), and which model you used for depth prediction?