traveller59 / second.pytorch

SECOND for KITTI/NuScenes object detection
MIT License
1.72k stars 721 forks

How to test pointpillar model in KITTI testset? #187

Open liqisa opened 5 years ago

liqisa commented 5 years ago

I have trained a PointPillars model and executed `python train.py evaluate ...`, which produced:

```
middle_class_name PointPillarsScatter
Restoring parameters from model/voxelnet-306241.tckpt
remain number of infos: 3769
Generate output labels...
[100.0%][===================>][8.77it/s][03:49>00:00]
generate label finished(16.39/s). start eval:
avg forward time per example: 0.028
avg postprocess time per example: 0.014
Car AP@0.70, 0.70, 0.70:
bbox AP:90.57, 88.81, 87.41
bev  AP:89.70, 86.77, 84.34
3d   AP:85.07, 75.90, 69.53
aos  AP:90.28, 88.09, 86.34
Car AP@0.70, 0.50, 0.50:
bbox AP:90.57, 88.81, 87.41
bev  AP:90.76, 89.94, 89.25
3d   AP:90.67, 89.75, 88.94
aos  AP:90.28, 88.09, 86.34

Car coco AP@0.50:0.05:0.95:
bbox AP:70.22, 66.31, 64.91
bev  AP:69.57, 66.20, 64.50
3d   AP:57.58, 54.04, 51.76
aos  AP:69.99, 65.78, 64.13
```

How can I test my model on the KITTI test dataset and get results like those in the paper, i.e. on the KITTI test 3D detection benchmark and the KITTI test BEV detection benchmark?

Thanks for your work!

liqisa commented 5 years ago

Must I submit the results to the official KITTI page? Are there any other options?

traveller59 commented 5 years ago

You must submit a zip file containing KITTI label files to the KITTI test server. This result is bad (if you are using xyres_16); I will update the configs to use the bug-fixed PillarFeatureNet layer.
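For packaging the predicted label files, a minimal sketch is below. The paths and the `make_submission` helper name are assumptions, not part of this repo; point `pred_dir` at the directory where your test-set inference run wrote one `<frame_id>.txt` per frame, and double-check the current KITTI submission instructions for the expected archive layout.

```python
# Sketch: package predicted KITTI label files into a zip for the test server.
# Assumption: the server expects the .txt label files at the archive root
# (no subdirectories), one file per test frame.
import zipfile
from pathlib import Path


def make_submission(pred_dir: str, out_zip: str) -> int:
    """Zip every .txt label file in pred_dir so each sits at the
    archive root. Returns the number of files added."""
    files = sorted(Path(pred_dir).glob("*.txt"))
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            zf.write(f, arcname=f.name)  # strip directory components
    return len(files)
```

Usage: `make_submission("results/test_preds", "submission.zip")`, then upload the zip on the KITTI benchmark page.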

peiyunh commented 5 years ago

Hi Yan, has this issue been fixed yet? I recently tried training with configs/pointpillars/car/xyres_16.config and saw similar results. Below are the numbers:

```
#################################
              EVAL
#################################
Generate output labels...
[100.0%][===================>][16.30it/s][02:01>00:00]
generate label finished(30.91/s). start eval:
Evaluation official
Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:90.64, 89.03, 87.71
bev  AP:89.91, 87.00, 84.31
3d   AP:85.37, 76.41, 69.96
aos  AP:0.47, 1.10, 1.90
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:90.64, 89.03, 87.71
bev  AP:90.79, 89.94, 89.22
3d   AP:90.79, 89.83, 88.99
aos  AP:0.47, 1.10, 1.90

Evaluation coco
Car coco AP@0.50:0.05:0.95:
bbox AP:71.34, 67.14, 65.74
bev  AP:69.34, 65.89, 64.33
3d   AP:59.03, 54.49, 52.13
aos  AP:0.34, 0.83, 1.41
```

Do they look lower than what you would have expected?

I have a related question. As you mentioned, PointPillars was developed on top of your v1.0 codebase. Have there been any changes since then that would affect accuracy significantly? Would you recommend using their code in order to reproduce the numbers in that paper?

Thanks!

qchenclaire commented 5 years ago

@peiyunh Those are not similar to the author's results. I got results similar to yours: the AOS is close to 0, while the AOS from the pretrained model is very high (~90). That's very weird.
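For intuition on what an AOS near 0 means: AOS weights each true positive by an orientation similarity `s = (1 + cos(delta_theta)) / 2` instead of 1 (Geiger et al., CVPR 2012). A minimal sketch of that term, with illustrative names not taken from this repo:

```python
# Sketch of the per-detection orientation-similarity term behind AOS.
# A true positive contributes s = (1 + cos(pred_yaw - gt_yaw)) / 2,
# so a perfect heading scores 1.0 and a heading flipped by pi scores 0.0.
import math


def orientation_similarity(pred_yaw: float, gt_yaw: float) -> float:
    """Orientation similarity in [0, 1] for one matched detection."""
    return (1.0 + math.cos(pred_yaw - gt_yaw)) / 2.0
```

Since random headings would average around 0.5, an AOS near 0 alongside high bbox AP suggests the predicted headings are systematically flipped by ~180 degrees (or the direction classifier output is being ignored), rather than the boxes themselves being wrong.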

HenryJunW commented 4 years ago

I got similar results, except for the AOS numbers. Are they lower than expected or not?