PRBonn / semantic-kitti-api

SemanticKITTI API for visualizing dataset, processing data, and evaluating results.
http://semantic-kitti.org
MIT License

assertion error when performing semantic segmentation evaluation on test split #30

Closed amiltonwong closed 4 years ago

amiltonwong commented 4 years ago

Hi, authors,

When I run the semantic segmentation evaluation on the test split as instructed: `./evaluate_semantics.py --dataset /data2/kitti_dataset/dataset/ --predictions /media/root/mdata/dataset/lidar-bonnetal_models/predictions/knn_postprocess/darknet53-knn --split test`

I got the following assertion error:

```
(pytorch1.1) root@Lab-PC:/data/code11/semantic-kitti-api# ./evaluate_semantics.py --dataset /data2/kitti_dataset/dataset/ --predictions /media/root/mdata/dataset/lidar-bonnetal_models/predictions/knn_postprocess/darknet53-knn --split test
********************************************************************************
INTERFACE:
Data:  /data2/kitti_dataset/dataset/
Predictions:  /media/root/mdata/dataset/lidar-bonnetal_models/predictions/knn_postprocess/darknet53-knn
Backend:  numpy
Split:  test
Config:  config/semantic-kitti.yaml
Limit:  None
Codalab:  None
********************************************************************************
Opening data config file config/semantic-kitti.yaml
Ignoring xentropy class  0  in IoU evaluation
[IOU EVAL] IGNORE:  [0]
[IOU EVAL] INCLUDE:  [ 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19]
Traceback (most recent call last):
  File "./evaluate_semantics.py", line 169, in <module>
    assert(len(label_names) == len(pred_names))
AssertionError
```

Since the test split doesn't contain ground-truth labels, label_names at line 169 of evaluate_semantics.py ends up as an empty list, which violates the assertion. Is there a mistake in my command above?
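For illustration, here is a minimal sketch of why the list ends up empty; the glob logic is my assumption about how the script gathers files, not the verbatim source, and the path is taken from the command above:

```python
import glob
import os

# Hedged sketch, not the verbatim script: the evaluator pairs ground-truth
# .label files with prediction files 1:1. The test split (sequences 11-21)
# ships without labels/ directories, so the glob below yields an empty list
# and the length assertion fails.
dataset_root = "/data2/kitti_dataset/dataset"  # --dataset path from above

label_names = []
for seq in range(11, 22):  # test split sequences 11..21
    label_dir = os.path.join(dataset_root, "sequences", f"{seq:02d}", "labels")
    label_names.extend(sorted(glob.glob(os.path.join(label_dir, "*.label"))))

print(len(label_names))  # 0 -> assert(len(label_names) == len(pred_names)) fails
```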

THX!

amiltonwong commented 4 years ago

Supplement: the evaluation script runs fine on the valid split: `./evaluate_semantics.py --dataset /data2/kitti_dataset/dataset/ --predictions /media/root/mdata/dataset/lidar-bonnetal_models/predictions/knn_postprocess/darknet53-knn --split valid`
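For context, the split-to-sequence mapping from config/semantic-kitti.yaml, reproduced here as a Python dict (values are the published SemanticKITTI splits):

```python
# Split definitions as in config/semantic-kitti.yaml. Labels are only
# distributed for the train and valid sequences, which is why --split valid
# works while --split test trips the assertion.
SPLITS = {
    "train": [0, 1, 2, 3, 4, 5, 6, 7, 9, 10],
    "valid": [8],
    "test": [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
}
```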

tano297 commented 4 years ago

Hi,

This is the intended behavior, since the test set labels are not available to you. To evaluate your results on the test set, you need to make a submission to our benchmark.
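For reference, a minimal sketch of packaging test-split predictions into the zip layout the benchmark expects (sequences/<seq>/predictions/<scan>.label); the prediction root is taken from the command above, and the output filename is an illustrative assumption:

```python
import os
import zipfile

# Illustrative sketch: zip test-split predictions (sequences 11-21) using
# the sequences/<seq>/predictions/<scan>.label layout the benchmark expects.
# pred_root and out_zip are placeholders for your own paths.
pred_root = "/media/root/mdata/dataset/lidar-bonnetal_models/predictions/knn_postprocess/darknet53-knn"
out_zip = "submission.zip"

with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
    for seq in range(11, 22):
        pred_dir = os.path.join(pred_root, "sequences", f"{seq:02d}", "predictions")
        for name in sorted(os.listdir(pred_dir)):
            if name.endswith(".label"):
                zf.write(os.path.join(pred_dir, name),
                         arcname=f"sequences/{seq:02d}/predictions/{name}")
```

If your checkout includes it, the repo's validate_submission.py can sanity-check the archive before uploading.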

amiltonwong commented 4 years ago

Got it. Thanks