-
Thank you for the excellent work and release of the source code! I just want to ask if you have any plans to release the checkpoint trained on SemanticKITTI? I want to reproduce some baseline results f…
-
Thank you for your great work! But when I try to run the training, I ran into a problem:
```
Traceback (most recent call last):
File "/users/.conda/envs/m3net/lib/python3.8/site-packages/torch/mult…
-
How can I get the image data for SemanticKITTI?
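For context, my understanding is that SemanticKITTI itself only ships the LiDAR scans, labels, and poses, and that the RGB images are the "color" download of the KITTI Odometry benchmark, which uses the same sequence and frame numbering. Below is a minimal sketch of how I pair an image with its scan, assuming the usual `dataset/sequences/XX/{image_2, velodyne}` layout (the helper name is mine):
```
import os.path as osp

import numpy as np
from PIL import Image


def load_frame(root, seq, frame_id):
    """Load the left color image and the LiDAR scan for one frame.

    Assumes the KITTI Odometry layout:
      root/sequences/<seq>/image_2/<frame>.png   (KITTI "color" download)
      root/sequences/<seq>/velodyne/<frame>.bin  (SemanticKITTI / KITTI download)
    """
    seq_dir = osp.join(root, 'sequences', '%02d' % seq)
    image = np.asarray(Image.open(osp.join(seq_dir, 'image_2', '%06d.png' % frame_id)))
    # Each LiDAR point is stored as (x, y, z, intensity) in float32.
    points = np.fromfile(osp.join(seq_dir, 'velodyne', '%06d.bin' % frame_id),
                         dtype=np.float32).reshape(-1, 4)
    return image, points
```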
-
Thanks for your work! I have reproduced PTv3 with your codebase, but the mIoU on SemanticKITTI only reaches about **65**. The config is the same as the NuScenes one shown in your paper! I would like …
-
Hi, brilliant work.
I would like to know if the SemanticKITTI dataset provides official depth GT? I need to utilize accurate depth information.
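As far as I know, SemanticKITTI does not provide dense depth ground truth; the common workaround is to project the LiDAR scan into the camera with the odometry calibration, which gives sparse but accurate depth. Here is a minimal sketch of that projection, assuming the standard `calib.txt` with `P2` and `Tr` entries (function and variable names are my own):
```
import numpy as np


def lidar_to_sparse_depth(points, P2, Tr, img_h, img_w):
    """Project LiDAR points into the left color camera to get a sparse depth map.

    points: (N, 4) array of (x, y, z, intensity) in the velodyne frame.
    P2:     (3, 4) projection matrix of camera 2 from calib.txt.
    Tr:     (3, 4) velodyne-to-camera transform from calib.txt.
    """
    # Homogeneous LiDAR coordinates -> camera frame.
    xyz1 = np.hstack([points[:, :3], np.ones((points.shape[0], 1), dtype=points.dtype)])
    cam = xyz1 @ np.vstack([Tr, [0, 0, 0, 1]]).T      # (N, 4) in camera coordinates
    cam = cam[cam[:, 2] > 0]                          # keep points in front of the camera

    # Project with the camera-2 matrix and rasterize the depths.
    uvw = cam @ P2.T                                  # (N, 3)
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(np.int64)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(np.int64)
    depth = cam[:, 2]

    valid = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    depth_map = np.zeros((img_h, img_w), dtype=np.float32)
    depth_map[v[valid], u[valid]] = depth[valid]
    return depth_map
```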
Many thanks
-
Hi, I did not find the config file in the `configs/semantic_kitti` folder. Will it be released?
-
Hello, thank you for your groundbreaking work!
Can this work be applied to a dataset that only has a front-facing camera, for example SemanticKITTI?
-
I noticed that your test set here gives the test results directly, but as far as I know SemanticKITTI's test results are supposed to be obtained by uploading the predicted labels to the official benchmark server. I'd like to ask "Syncing...…
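For reference, my understanding is that test-set numbers for SemanticKITTI can only come from the benchmark server, which expects one `.label` file of predictions per scan, mapped back to the raw label IDs. A minimal sketch of how I would dump predictions for submission (the array names and the `learning_map_inv` lookup are assumptions on my side, not your actual code):
```
import os
import os.path as osp

import numpy as np


def save_predictions(pred, learning_map_inv, out_dir, seq, frame_id):
    """Write one scan's predictions in the layout the benchmark server expects.

    pred:             (N,) array of per-point train IDs for a single scan.
    learning_map_inv: dict mapping train IDs back to raw SemanticKITTI label IDs.
    """
    # Map train IDs (0..19) back to the raw label IDs used by the benchmark.
    lut = np.zeros(max(learning_map_inv) + 1, dtype=np.uint32)
    for train_id, label_id in learning_map_inv.items():
        lut[train_id] = label_id
    raw = lut[pred.astype(np.int64)]

    # sequences/XX/predictions/XXXXXX.label, one uint32 per point, same ordering as the scan.
    label_dir = osp.join(out_dir, 'sequences', '%02d' % seq, 'predictions')
    os.makedirs(label_dir, exist_ok=True)
    raw.astype(np.uint32).tofile(osp.join(label_dir, '%06d.label' % frame_id))
```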
-
Thank you very much for your great work. I have a question about the SemanticKITTI label preprocessing.
```
# Load segmentation label
label_file = osp.join(label_dir, '%06d.label'%(sid))
label = np.f…
```
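In case it clarifies my question, this is my understanding of the `.label` format: each entry is a uint32 whose lower 16 bits are the raw semantic label and whose upper 16 bits are the instance ID, and the raw IDs are then remapped with the `learning_map` from `semantic-kitti.yaml`. A minimal sketch of that decoding (the function name is mine, not from your code):
```
import numpy as np
import yaml


def load_semantic_label(label_file, learning_map):
    """Decode one SemanticKITTI .label file.

    Each entry is a uint32: lower 16 bits = raw semantic label ID,
    upper 16 bits = instance ID.
    """
    label = np.fromfile(label_file, dtype=np.uint32)
    sem_label = label & 0xFFFF        # raw semantic IDs (e.g. 10 = car, 40 = road)
    inst_label = label >> 16          # per-object instance IDs

    # Remap raw IDs to the 0..19 train IDs used for training and evaluation.
    lut = np.zeros(max(learning_map) + 1, dtype=np.int64)
    for raw_id, train_id in learning_map.items():
        lut[raw_id] = train_id
    return lut[sem_label], inst_label


# learning_map comes from the semantic-kitti.yaml config of the official API.
with open('semantic-kitti.yaml') as f:
    learning_map = yaml.safe_load(f)['learning_map']
```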
-
According to your paper: "With the monocular image or multi-camera images as the input, the multi-scale features are first extracted by the image encoder, and then lifted to 3D feature volume, ..."
…
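To check that I read the lifting step correctly: my mental model is that each voxel center is projected into the image with the camera parameters and the 2D feature map is sampled at that pixel to fill the 3D feature volume. A minimal sketch of that idea (shapes, names, and the bilinear sampling are my own assumptions, not your exact implementation):
```
import torch
import torch.nn.functional as F


def lift_image_features(feat_2d, voxel_xyz, intrinsics, img_hw):
    """Sample 2D image features at projected voxel centers.

    feat_2d:    (C, H, W) feature map from the image encoder.
    voxel_xyz:  (N, 3) voxel centers in the camera frame.
    intrinsics: (3, 3) camera intrinsic matrix.
    img_hw:     (img_h, img_w) of the image the intrinsics refer to.
    """
    img_h, img_w = img_hw
    # Project voxel centers into pixel coordinates.
    uvw = voxel_xyz @ intrinsics.T                      # (N, 3)
    u = uvw[:, 0] / uvw[:, 2].clamp(min=1e-6)
    v = uvw[:, 1] / uvw[:, 2].clamp(min=1e-6)
    in_front = voxel_xyz[:, 2] > 0

    # Normalize to [-1, 1] and bilinearly sample the feature map.
    grid = torch.stack([2 * u / (img_w - 1) - 1,
                        2 * v / (img_h - 1) - 1], dim=-1)      # (N, 2)
    sampled = F.grid_sample(feat_2d[None], grid[None, :, None, :],
                            align_corners=True)                # (1, C, N, 1)
    feats = sampled[0, :, :, 0].transpose(0, 1)                # (N, C)

    # Zero out voxels behind the camera or outside the image.
    valid = in_front & (u >= 0) & (u <= img_w - 1) & (v >= 0) & (v <= img_h - 1)
    return feats * valid[:, None]
```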