-
Hi,
The segmented point clouds for training and testing contain the following properties, respectively (see the loading sketch after the list):
- [x, y, z, r, g, b, semantic, instance, visible, confidence];
- [x, y, z, r, g, b, vi…
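For readers parsing these files, here is a minimal loading sketch, assuming each cloud is stored as an N×10 NumPy array with the columns in the listed order; the file name `train_cloud.npy` is a placeholder:

```python
import numpy as np

# Hypothetical column layout matching the listed per-point properties.
FIELDS = ["x", "y", "z", "r", "g", "b",
          "semantic", "instance", "visible", "confidence"]

cloud = np.load("train_cloud.npy")   # placeholder path; assumed (N, 10) array
assert cloud.shape[1] == len(FIELDS)

xyz = cloud[:, 0:3]                       # point coordinates
rgb = cloud[:, 3:6].astype(np.uint8)      # per-point color
semantic = cloud[:, 6].astype(np.int64)   # semantic label id
instance = cloud[:, 7].astype(np.int64)   # instance id
visible = cloud[:, 8].astype(bool)        # visibility flag
confidence = cloud[:, 9]                  # label confidence
```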
-
Hello, thank you for your groundbreaking work!
Can this work be applied to a dataset that has only a front-facing camera, for example SemanticKITTI?
-
Hello,
Thank you for your work on the paper and for publishing your code. I would like to ask whether there is already a pre-trained model for KITTI-360 or a pipeline set up for it. I am currently working …
-
Hi,
I just had a few questions about using our own data and running inference with the PENet pretrained weights.
1) How sparse can the depth map be?
Currently, my inference image is from the …
-
Hi,
Thanks for your work so far! I am interested in adapting NoPe-NeRF to the KITTI-360 dataset and in particular using LiDAR data as an alternative method for depth supervision and establishing sp…
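For anyone attempting the same, here is a minimal sketch of projecting LiDAR returns into a sparse depth map for supervision; the function name and its inputs (the 3×3 intrinsics `K`, the 4×4 LiDAR-to-camera extrinsic `T_cam_lidar`, and the image size) are placeholders to be filled in from the KITTI-360 calibration files:

```python
import numpy as np

def lidar_to_sparse_depth(points, T_cam_lidar, K, h, w):
    """Project LiDAR points (N, 3) into a sparse (h, w) depth map.

    T_cam_lidar (4x4 LiDAR-to-camera extrinsic) and K (3x3 intrinsics)
    would come from the KITTI-360 calibration files.
    """
    # Transform the points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera

    # Pinhole projection into pixel coordinates.
    uv = (K @ cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = cam[:, 2]

    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], z[inside]

    depth = np.zeros((h, w), dtype=np.float32)  # 0 means "no measurement"
    order = np.argsort(-z)                      # write far points first ...
    depth[v[order], u[order]] = z[order]        # ... so nearer ones overwrite
    return depth
```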
-
Hi, could you please provide the correspondence between SSCBench and the original KITTI360 IDs? I want to use the pose information from the original dataset's Oxts files, but I noticed that the origin…
-
Hi, thank you for sharing this great work.
I trained the code on our own dataset, and the result is odd.
I think I misunderstood the coordinate conversion.
Below is my result.
![image](https://githu…
-
Hi, thanks for your great work!
I encountered some problems when converting the depth map to a point cloud. I used Depth Anything to predict the depth of images from the KITTI360 dataset, and when …
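One common pitfall here: Depth Anything predicts relative (affine-invariant) depth unless you use its metric-depth variant, so the prediction must be rescaled to metric depth before back-projection. A minimal back-projection sketch under a standard pinhole model follows; `K` stands in for the KITTI-360 perspective-camera intrinsics:

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project an (h, w) metric depth map into an (N, 3) point cloud.

    K is the 3x3 pinhole intrinsic matrix (for KITTI-360, from its
    perspective-camera calibration). depth must be metric: a relative
    prediction needs a scale/shift fit against known depths first.
    """
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Pixel grid: v indexes rows (height), u indexes columns (width).
    v, u = np.mgrid[0:h, 0:w].astype(np.float32)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop pixels without a valid depth
```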
-
Hi @GANWANSHUI,
Thank you for releasing this code.
Can I use this work with a single-view dataset? I noticed this repo uses three views from three cameras.
-
Hi,
Thanks a lot for this super cool work!
I have a quick question about the batch size. It appears to me that the train_kitti360.py script sets up the KITTI360DataModule, which uses `collate_fn…
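For context on how a custom collate function interacts with the batch size, here is a hypothetical sketch; the sample keys and structure are assumptions, not the repo's actual KITTI360DataModule code:

```python
import torch

def collate_fn(batch):
    """Hypothetical collate for samples mixing fixed-size and ragged data.

    Fixed-size tensors (images) are stacked along a batch dimension, while
    variable-length point clouds stay as a Python list; either way, the
    effective batch size is len(batch).
    """
    images = torch.stack([sample["image"] for sample in batch])  # (B, 3, H, W)
    points = [sample["points"] for sample in batch]              # ragged (Ni, 3)
    return {"image": images, "points": points}
```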