PRBonn / semantic-kitti-api

SemanticKITTI API for visualizing dataset, processing data, and evaluating results.
http://semantic-kitti.org
MIT License

On the issue of uploading test set prediction labels to codalab #144

completezealous closed this issue 7 months ago

completezealous commented 7 months ago

Hello! I uploaded SemanticKITTI predicted labels for test sequences 11-21 to CodaLab. The submission zip passed the validation of validate_submission.py in the SemanticKITTI API:

```
(semanticvisualize) lk@sun:~/semantic-kitti-api$ ./validate_submission.py --task segmentation /home/lk/sequences_2.zip /data/SemanticKITTI/dataset
Validating zip archive "/home/lk/sequences_2.zip".

============ segmentation ============

  1. Checking filename.............. ✓
  2. Checking directory structure... ✓
  3. Checking file sizes............ ✓

Everything ready for submission!
```

I also successfully used the semantic-kitti-api to visualize test sequences 11-21 with my predicted labels, which means my zip file structure is OK and the predicted labels are valid. Still, my upload to CodaLab fails. (The file downloaded back from CodaLab is also complete.)

Visualization: [screenshot]

Submission error: [screenshot]

The error is the same all four times. I honestly don't know why this is happening, because validate_submission.py and the visualization above show that the file structure and labels I uploaded are valid.
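For reference, a quick independent look at the zip layout can be scripted. The sketch below assumes the sequences/<seq>/predictions/<scan>.label layout described by the submission format; it is not the official validator.

```python
#!/usr/bin/env python3
# Sketch: list and count the prediction label files inside a submission zip.
# Assumes the layout sequences/<seq>/predictions/<scan>.label for sequences 11-21.
import re
import sys
import zipfile

TEST_SEQUENCES = {f"{i:02d}" for i in range(11, 22)}  # "11" .. "21"
PATTERN = re.compile(r"^sequences/(\d{2})/predictions/\d{6}\.label$")

def inspect(zip_path):
    counts = {seq: 0 for seq in TEST_SEQUENCES}
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if name.endswith("/"):  # skip directory entries
                continue
            m = PATTERN.match(name)
            if m is None or m.group(1) not in TEST_SEQUENCES:
                print(f"unexpected entry: {name}")
            else:
                counts[m.group(1)] += 1
    for seq in sorted(counts):
        print(f"sequence {seq}: {counts[seq]} label files")

if __name__ == "__main__":
    inspect(sys.argv[1])  # e.g. ./inspect_zip.py /home/lk/sequences_2.zip
```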

jbehley commented 7 months ago

Sorry for the inconvenience, I will have a look at what might be wrong.

One thing I don't check is the number of files, so if you have additional files in the folders, that might trigger the error.

Note that failed submissions don't count towards the max number of submissions.
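Since the validator does not compare file counts, a cross-check of the zip against the local velodyne scans is a useful extra step. This is a sketch under the standard dataset layout (<root>/sequences/<seq>/velodyne/*.bin); the paths are taken from the report above and may need adjusting:

```python
import os
import zipfile

# Sketch: for every velodyne scan on disk, check that the submission zip
# contains a matching prediction label. Paths are placeholders.
DATASET_ROOT = "/data/SemanticKITTI/dataset"
SUBMISSION = "/home/lk/sequences_2.zip"

with zipfile.ZipFile(SUBMISSION) as zf:
    names = set(zf.namelist())

for seq in [f"{i:02d}" for i in range(11, 22)]:
    scan_dir = os.path.join(DATASET_ROOT, "sequences", seq, "velodyne")
    scans = sorted(f for f in os.listdir(scan_dir) if f.endswith(".bin"))
    missing = [s for s in scans
               if f"sequences/{seq}/predictions/{s[:-4]}.label" not in names]
    print(f"sequence {seq}: {len(scans)} scans, {len(missing)} missing labels")
    for m in missing[:5]:  # show at most a few
        print(f"  no label for {m}")
```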

completezealous commented 7 months ago

> Sorry for the inconvenience, I will have a look at what might be wrong.
>
> One thing I don't check is the number of files, so if you have additional files in the folders, that might trigger the error.
>
> Note that failed submissions don't count towards the max number of submissions.

Thank you for your reply, but there are no additional files in the folders.

completezealous commented 7 months ago

> Sorry for the inconvenience, I will have a look at what might be wrong.
>
> One thing I don't check is the number of files, so if you have additional files in the folders, that might trigger the error.
>
> Note that failed submissions don't count towards the max number of submissions.

Now I have also used evaluate_semantics.py from your semantic-kitti-api to evaluate the predicted labels of test sequences 11-21. Of course, I do not have the ground-truth labels of the test set, so I used my predicted labels as the ground truth. Although the resulting IoU is meaningless, this also shows that all my predicted label files are valid. However, when I upload my predicted labels to CodaLab for evaluation, I still get the same error as before. Is it because your code misjudged my labels as training set labels, which leads to an inconsistent number of files? Or is it something else?

[screenshot]

In addition, I hope you add code to the evaluate_semantics.py on CodaLab to print the number of ground-truth labels and the number of predicted labels.
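The requested diagnostic would only need a few lines. A sketch, not the actual CodaLab script, with placeholder paths:

```python
import glob
import os

def count_labels(root, subdir):
    """Sketch: print per-sequence .label counts under root/sequences/<seq>/<subdir>."""
    for seq in [f"{i:02d}" for i in range(11, 22)]:
        files = glob.glob(os.path.join(root, "sequences", seq, subdir, "*.label"))
        print(f"sequence {seq}: {len(files)} files in {subdir}")

# Ground truth vs. predictions (paths are placeholders):
count_labels("/data/SemanticKITTI/dataset", "labels")
count_labels("/home/lk/predictions", "predictions")
```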

jbehley commented 7 months ago

It seems like you are missing some files in your local KITTI copy.

[screenshot]

We are currently just checking if the label files are consistent with your bin files.

But I will also modify the CodaLab evaluation script to make this more explicit there.
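For context, the size check boils down to a per-scan point-count comparison: a velodyne .bin stores four float32 values (x, y, z, remission) per point, i.e. 16 bytes, and a .label stores one uint32 per point, i.e. 4 bytes. A minimal sketch of that check:

```python
import os

def labels_match_scan(bin_path, label_path):
    """Sketch: the point count implied by the .bin must equal that of the .label."""
    n_points = os.path.getsize(bin_path) // 16   # 4 x float32 per point
    n_labels = os.path.getsize(label_path) // 4  # 1 x uint32 per point
    if n_points != n_labels:
        print(f"mismatch: {bin_path} has {n_points} points, "
              f"{label_path} has {n_labels} labels")
    return n_points == n_labels
```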

jbehley commented 7 months ago

I have now added some more diagnostics to the evaluation script that produce output like this (this is for one of your submissions):

[screenshot]

Thus, some files seem to be missing from your extraction of the original KITTI data. (Or some label files are not getting written.)
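One way to catch such an incomplete extraction early is to compare the number of extracted scans per test sequence against the expected totals. The counts below are the commonly reported KITTI odometry scan counts (summing to 20351 test scans); treat them as an assumption and verify them against the official download:

```python
import os

# Expected scans per test sequence (commonly reported KITTI odometry counts;
# verify these against the official dataset before relying on them).
EXPECTED = {"11": 921, "12": 1061, "13": 3281, "14": 631, "15": 1901,
            "16": 1731, "17": 491, "18": 1801, "19": 4981, "20": 831,
            "21": 2721}

DATASET_ROOT = "/data/SemanticKITTI/dataset"  # adjust to your local copy

for seq, expected in sorted(EXPECTED.items()):
    scan_dir = os.path.join(DATASET_ROOT, "sequences", seq, "velodyne")
    found = len([f for f in os.listdir(scan_dir) if f.endswith(".bin")])
    status = "OK" if found == expected else f"MISSING {expected - found}"
    print(f"sequence {seq}: {found}/{expected} scans ({status})")
```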

completezealous commented 7 months ago

> I have now added some more diagnostics to the evaluation script that produce output like this (this is for one of your submissions):
>
> [screenshot]
>
> Thus, some files seem to be missing from your extraction of the original KITTI data. (Or some label files are not getting written.)

Thanks for your work and your hint; this is indeed the reason: when I initially uncompressed the SemanticKITTI dataset, some files were missing, which means I have been training on an incomplete dataset! 🤣