drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License

How to get evaluation metrics on validation data [KITTI-360]? Where should I specify that? #90

Closed meehirmhatrepy closed 8 months ago

meehirmhatrepy commented 8 months ago
How do I get evaluation metrics on the validation data [KITTI-360]? Where should I specify that to get evaluation metrics?

_Originally posted by @meehirmhatrepy in https://github.com/drprojects/superpoint_transformer/issues/30#issuecomment-2031487313_

drprojects commented 8 months ago

The metrics are computed by default when you train a model on KITTI-360.

Yet, if you want to compute metrics on the validation set using our pretrained model, you can do so with a minor modification to the eval.py script. Replace:

trainer.test(model=model, datamodule=datamodule, ckpt_path=cfg.ckpt_path)

by

trainer.validate(model=model, datamodule=datamodule, ckpt_path=cfg.ckpt_path)

This is PyTorch Lightning syntax. The main difference is that the former will use datamodule.test_dataloader(), while the latter will use datamodule.val_dataloader().
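To make the dataloader-selection difference concrete, here is a minimal toy mock (not the real PyTorch Lightning API, just an illustration of the dispatch described above): `test()` pulls batches from the test split, `validate()` from the validation split.

```python
# Toy mock illustrating which dataloader each evaluation entry point uses.
# These stub classes are illustrative only; the real Trainer/LightningDataModule
# come from PyTorch Lightning.

class ToyDataModule:
    """Stand-in for a LightningDataModule with two evaluation splits."""

    def val_dataloader(self):
        return "val batches"

    def test_dataloader(self):
        return "test batches"


class ToyTrainer:
    """Stand-in for pl.Trainer, showing only the dataloader dispatch."""

    def test(self, model=None, datamodule=None, ckpt_path=None):
        # trainer.test(...) evaluates on datamodule.test_dataloader()
        return datamodule.test_dataloader()

    def validate(self, model=None, datamodule=None, ckpt_path=None):
        # trainer.validate(...) evaluates on datamodule.val_dataloader()
        return datamodule.val_dataloader()


dm = ToyDataModule()
trainer = ToyTrainer()
print(trainer.test(datamodule=dm))      # -> "test batches"
print(trainer.validate(datamodule=dm))  # -> "val batches"
```

So swapping `trainer.test` for `trainer.validate` in eval.py is all that is needed to switch the evaluation from the test split to the validation split.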

Then running the following should compute validation metrics for KITTI-360:

python src/eval.py experiment=kitti360 ckpt_path=/path/to/downloaded/checkpoint

PS: If you ❤️ or use this project, don't forget to give it a ⭐, it means a lot to us !