drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License

Evaluation on KITTI-360 Test set #30

Closed hansoogithub closed 11 months ago

hansoogithub commented 11 months ago

I have a problem viewing the performance evaluation numbers when I run:

python src/eval.py experiment=kitti360  ckpt_path='downloaded checkpoint from your website'

Below is the result I got:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃        Test metric        ┃       DataLoader 0        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│         test/loss         │            0.0            │
│         test/macc         │            0.0            │
│         test/miou         │            0.0            │
│          test/oa          │            0.0            │
└───────────────────────────┴───────────────────────────┘

But when I train a new model from scratch on KITTI-360 with

python src/train.py experiment=kitti360

I can view the numbers during training:

val/miou_best: 63.757 val/oa_best: 92.886 val/macc_best: 79.989 

I get this warning during evaluation: "You are using a CUDA device ('NVIDIA GeForce RTX 4090') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision"

I tried both 'high' and 'medium' precision, but there is no change in the evaluation result.

I am running the project in a Docker container with GPU passthrough, set up according to your instructions (CUDA 11.8). Please help, and thank you for the project!

drprojects commented 11 months ago

Hi @hansoogithub, thanks for your interest in the project.

I have a problem viewing the performance evaluation numbers when i run

This is normal behavior. KITTI-360's test set has held-out labels, meaning you do not have access to the labels for performance evaluation; those are stored on a benchmarking server (see the official website). So the local performance evaluation of SPT can only be run on the validation set, as stated in our paper. This is why you see empty test performance when running `python src/eval.py experiment=kitti360`.
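As a self-contained illustration (not the repo's actual metric code), here is why held-out labels produce an all-zero table: when every test point carries the "ignore" label, there are no valid points to score, so metrics like overall accuracy degenerate to 0. The `IGNORE` value and `overall_accuracy` helper below are hypothetical names for the sketch.

```python
# Hedged sketch: metrics over points whose label is the ignore index.
# On KITTI-360's test set, ALL labels are held out (ignored locally),
# so the valid set is empty and the metric falls back to 0.0.
IGNORE = -1

def overall_accuracy(preds, labels, ignore=IGNORE):
    """Fraction of correctly predicted points among annotated points."""
    valid = [(p, l) for p, l in zip(preds, labels) if l != ignore]
    if not valid:  # no annotated points at all -> degenerate 0.0
        return 0.0
    return sum(p == l for p, l in valid) / len(valid)

print(overall_accuracy([0, 1, 2], [IGNORE, IGNORE, IGNORE]))  # -> 0.0
print(overall_accuracy([0, 1, 2], [0, 1, 1]))                 # ~0.667
```

The same logic applies to mAcc and mIoU: with zero valid points, every confusion-matrix entry is zero, which is exactly the table shown above.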

I get this warning during evaluation: "You are using a CUDA device ('NVIDIA GeForce RTX 4090') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` ..."

This is unrelated to the above comment. You can safely ignore this warning.
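For completeness, the setting the warning suggests is a single PyTorch call. It only enables faster (TF32) float32 matrix multiplications on Tensor Core GPUs and has no bearing on the missing test labels, which is why changing it does not alter the zero metrics:

```python
import torch

# Trade a little float32 matmul precision for Tensor Core speed.
# Valid values: "highest" (default), "high", "medium".
torch.set_float32_matmul_precision("high")

# The current setting can be read back for verification.
print(torch.get_float32_matmul_precision())  # -> high
```

Setting this once near the top of a training or evaluation script is enough; it is a process-wide switch.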

Best,

Damien

meehirmhatrepy commented 5 months ago

How can I get evaluation metrics on the validation data for KITTI-360? Where should I specify that to get the evaluation metrics?