LittlePey / SFD

Sparse Fuse Dense: Towards High Quality 3D Detection with Depth Completion (CVPR 2022, Oral)
Apache License 2.0

Problem in the results after testing. #14

Closed NNtamp closed 1 year ago

NNtamp commented 1 year ago

Hello, I managed to test the model on the KITTI 3D test set, and the results were actually pretty good in terms of evaluation metrics. I was also able to visualize the KITTI 3D dataset using this repo (https://github.com/kuixu/kitti_object_vis).

The main problem is that when I try to visualize the results on the test set, the visualizations come back quite inaccurate, so I am wondering whether the numbers in the predicted txt files are incorrect (i.e. not matched to the image numbers, velodyne numbers, etc.). For example, find attached the calib, image_2 and velodyne files for image 000001 from the test set, along with the prediction of the same number (000001.txt) produced by evaluating the code on the test set (all in the zip file). If I try to visualize these with the repo above, I get the image below, which is quite inaccurate. Do you have any explanation for this?

Also note that the evaluation does not run on all of the test images but only on around half of them (e.g. 000000, 000003, 000007, etc. are missing).
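As a sanity check on the prediction txt files, it helps to parse a line against the standard KITTI object label/prediction layout (predictions append a trailing confidence score). Note that the location fields are in the camera coordinate frame, so boxes must be transformed with the calib matrices before being overlaid on the velodyne point cloud. A minimal sketch (the example line is illustrative, not from the attached files):

```python
# Field layout of the standard KITTI object label format; detector
# predictions append an extra trailing "score" field.
FIELDS = ["type", "truncated", "occluded", "alpha",
          "bbox_left", "bbox_top", "bbox_right", "bbox_bottom",
          "height", "width", "length", "x", "y", "z",
          "rotation_y", "score"]

def parse_kitti_line(line):
    """Parse one line of a KITTI-format prediction file into a dict."""
    tokens = line.split()
    return {name: tok if name == "type" else float(tok)
            for name, tok in zip(FIELDS, tokens)}

example = ("Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 "
           "1.65 1.67 3.64 -0.65 1.71 46.70 -1.59 0.92")
obj = parse_kitti_line(example)
print(obj["type"], obj["z"], obj["score"])  # Car 46.7 0.92
```

If a parsed line has sensible camera-frame coordinates but the boxes still land in the wrong place, the mismatch is more likely in the frame indices than in the box values themselves.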

output_000001_bboxes

Issue -testing files.zip

LittlePey commented 1 year ago

Hi, did you modify the data split and info path when you ran inference on the KITTI test set? You should modify them to `'test': test` and `'test': [kitti_infos_test.pkl]`, respectively.

NNtamp commented 1 year ago

> Hi, did you modify the data split and info path when you ran inference on the KITTI test set? You should modify them to `'test': test` and `'test': [kitti_infos_test.pkl]`, respectively.

Thank you @LittlePey for this. Are those the only two points we have to modify in order to run the evaluation on the test set? We applied the changes but got no test-set results; the evaluation was again performed on the validation set.

LittlePey commented 1 year ago

Oh sorry, you need to modify kitti_dataset.yaml instead of kitti_dataset_sfd.yaml, because sfd.yaml uses kitti_dataset.yaml as its base config rather than kitti_dataset_sfd.yaml, as you can see in line 4.
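For readers hitting the same issue: in OpenPCDet-style dataset configs, the two keys in question look roughly like the sketch below (a hedged example; the exact surrounding keys in kitti_dataset.yaml may differ slightly between versions):

```yaml
# kitti_dataset.yaml -- the base config that sfd.yaml inherits from
DATA_SPLIT: {
    'train': train,
    'test': test          # was 'test': val -- switch to the test split
}

INFO_PATH: {
    'train': [kitti_infos_train.pkl],
    'test': [kitti_infos_test.pkl]    # was [kitti_infos_val.pkl]
}
```

Editing these keys in kitti_dataset_sfd.yaml instead has no effect, since sfd.yaml never reads that file as its base config.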

NNtamp commented 1 year ago

Hi @LittlePey once again. Indeed, after your advice we managed to test the code on the test set, and the visualization results now look accurate! Thank you once more for this. Two follow-up questions, please:

  1. We received zeros as evaluation metrics, as shown below. Do you know why?

```
* Performance of EPOCH 34 ***
2022-08-08 13:30:49,292 INFO Run time per sample: 0.0772 second.
2022-08-08 13:30:49,292 INFO Generate label finished(sec_per_example: 0.0779 second).
2022-08-08 13:30:49,292 INFO recall_roi_0.3: 0.000000
2022-08-08 13:30:49,292 INFO recall_rcnn_0.3: 0.000000
2022-08-08 13:30:49,292 INFO recall_roi_0.5: 0.000000
2022-08-08 13:30:49,292 INFO recall_rcnn_0.5: 0.000000
2022-08-08 13:30:49,292 INFO recall_roi_0.7: 0.000000
2022-08-08 13:30:49,292 INFO recall_rcnn_0.7: 0.000000
2022-08-08 13:30:49,296 INFO Average predicted number of objects(7518 samples): 5.289
2022-08-08 13:30:49,546 INFO None
2022-08-08 13:30:49,546 INFO Result is save to /workspace/SFD/output/kitti_models/sfd/default/eval/epoch_34/test/default
2022-08-08 13:30:49,546 INFO
* Evaluation done.***
```

  2. Is the provided checkpoint trained only to detect the Car class? If not, is there a way to detect the remaining classes on the test set?

LittlePey commented 1 year ago

Hi,

1. KITTI doesn't provide annotations for the test set, so the eval code can't calculate AP on it, resulting in zero metrics.
2. The provided checkpoint is trained to detect only the Car class. If you want to detect other classes, you can modify sfd.yaml according to voxel_rcnn_3classes.yaml and train SFD yourself.
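For reference, the multi-class switch in voxel_rcnn_3classes.yaml-style configs amounts roughly to the following sketch (hedged: per-class anchor sizes and other class-dependent settings in the config also need matching entries, which are omitted here):

```yaml
# sfd.yaml -- sketch of the multi-class change, following voxel_rcnn_3classes.yaml
CLASS_NAMES: ['Car', 'Pedestrian', 'Cyclist']   # was ['Car']
```

Since the released checkpoint was trained with `CLASS_NAMES: ['Car']`, changing this alone does not make it predict the other classes; retraining is required.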

NNtamp commented 1 year ago

Thank you so much @LittlePey

faziii0 commented 7 months ago


Hi, I also get the same issue: on the training set of 7481 images it shows an AP of about 90%, but when I run evaluation on the test set it shows all zeros, even though it generates prediction files for all 7518 images. Can you tell me what the problem is? Any help would be appreciated. Thanks.