prophesee-ai / prophesee-automotive-dataset-toolbox

A set of Python scripts to evaluate the Automotive Datasets provided by Prophesee
Apache License 2.0
152 stars 30 forks

Problem with psee_evaluator #39

Closed raf329 closed 2 months ago

raf329 commented 2 months ago

When I run psee_evaluator.py I always get this error:

```
C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\src>python psee_evaluator.py gt_folder=C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\testfilelist02\gt dt_folder=C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\testfilelist02\2
There are 0 GT bboxes and 0 PRED bboxes
creating index...
index created!
Loading and preparing results...
Traceback (most recent call last):
  File "psee_evaluator.py", line 49, in <module>
    main()
  File "psee_evaluator.py", line 45, in main
    evaluate_folders(opt.dt_folder, opt.gt_folder, opt.camera)
  File "psee_evaluator.py", line 36, in evaluate_folders
    evaluate_detection(gt_boxes_list, result_boxes_list)
  File "C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\src\metrics\coco_eval.py", line 52, in evaluate_detection
    _coco_eval(flattened_gt, flattened_dt, height, width, labelmap=classes)
  File "C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\src\metrics\coco_eval.py", line 109, in _coco_eval
    coco_pred = coco_gt.loadRes(results)
  File "C:\Users\X\AppData\Local\Programs\Python\Python38\lib\site-packages\pycocotools\coco.py", line 329, in loadRes
    if 'caption' in anns[0]:
IndexError: list index out of range
```

I put a single .npy file from the testfilelist02 dataset into each of these folders:

```
C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\testfilelist02\gt
C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\testfilelist02\2
```

I also tried to evaluate my own .npy file (produced after passing a .DAT file from the dataset through my model) against a .npy file from testfilelist02, but I still get the same error. Python version: 3.8.10.

How can I solve this issue?

Thanks!

lbristiel-psee commented 2 months ago

Hi,

There is no known issue with the psee_evaluator.py script. According to the error message, it seems you are calling the script either on empty files or on files that contain no valid boxes. You could try with some other files, and if you still hit the issue, try executing the code step by step to understand where and why it fails.

Best, Laurent for Prophesee Support

raf329 commented 2 months ago

> Hi,
>
> There is no known issue with the psee_evaluator.py script. According to the error message, it seems you are calling the script either on empty files or on files that contain no valid boxes. You could try with some other files, and if you still hit the issue, try executing the code step by step to understand where and why it fails.
>
> Best, Laurent for Prophesee Support

Hi, Laurent!

I solved the issue. The problem was with the command line I used. The correct command line is:

```
python psee_evaluator.py C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\testfilelist02\gt\ C:\Event_Camera\AI\prophesee-automotive-dataset-toolbox\testfilelist02\2\
```
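For anyone hitting the same error: the working invocation above passes the two folders as plain positional arguments, while the failing one passed them as `key=value` tokens. Assuming the script parses its folders with `argparse` positionals (a plausible reproduction, not the toolbox's actual parser), the `key=value` tokens are silently accepted as literal path strings, which match no files and hence yield 0 boxes:

```python
# Hypothetical sketch of why key=value arguments fail with argparse positionals.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("gt_folder")
parser.add_argument("dt_folder")

# key=value tokens are accepted, but the whole token becomes the "path":
opt = parser.parse_args(["gt_folder=/tmp/gt", "dt_folder=/tmp/dt"])
print(opt.gt_folder)  # 'gt_folder=/tmp/gt' is not a valid directory, so 0 files are found

# Plain positional paths behave as intended:
opt = parser.parse_args(["/tmp/gt", "/tmp/dt"])
print(opt.gt_folder)  # '/tmp/gt'
```

That would explain the "There are 0 GT bboxes and 0 PRED bboxes" message followed by the pycocotools crash on an empty annotation list.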

Thanks a lot!

raf329 commented 2 months ago

Hi!

I used the same .npy file from the dataset as both the GT and the DT file. Is it OK that I do not get AP 1.0?

I received these lines:

```
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.306
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.324
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.316
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.333
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.288
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.365
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.702
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.958
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 1.000
```

Best Regards!

lbristiel-psee commented 2 months ago

> Is it OK that I do not get AP 1.0?

Yes, it is OK. The model shows varying levels of precision (AP) depending on the IoU threshold and object size. What we see is that recall is perfect (1.000) for up to 100 detections, indicating that the model can detect all relevant objects when many detections are allowed.
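As background on what the IoU thresholds in that COCO summary mean, here is a generic intersection-over-union sketch (not code from this toolbox): a detection slightly offset from the ground truth can count as a match at the loose IoU=0.50 threshold yet as a miss at the strict IoU=0.75 one, which is why AP varies across the IoU range even for reasonable detections.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = max(0.0, min(ax + aw, bx + bw) - ix)
    ih = max(0.0, min(ay + ah, by + bh) - iy)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A 10x10 detection shifted by 1 pixel in x and y against its ground truth:
print(iou((0, 0, 10, 10), (1, 1, 10, 10)))  # about 0.68: a match at IoU=0.50, a miss at IoU=0.75
```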

raf329 commented 2 months ago

Hi! Now I am testing my model, which was trained on the 1M dataset. After COCO evaluation I receive this:

```
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
```

For evaluation I used only these files from the dataset:

```
moorea_2019-02-15_000_td_366500000_426500000_td.DAT
moorea_2019-02-15_000_td_366500000_426500000_td.npy
```

I cannot understand why this happens. Is the accuracy of my model simply too low? Do the timestamps of gt_data (from the dataset) and dt_data (produced by my model) need to be equal?

I also changed the time_tol parameter (from 50000 to 500000) in this line:

```python
def evaluate_detection(gt_boxes_list, dt_boxes_list, classes=("car", "pedestrian"),
                       height=240, width=304, time_tol=50000):
```

but I get the same results.
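One quick way to check whether the GT and DT timestamps even overlap is to load both .npy box files and compare their time ranges. This is a minimal sketch that assumes the boxes are stored as a NumPy structured array with fields such as `t`, `x`, `y`, `w`, `h`, `class_id` (the field names in your files may differ, so check `dtype.names` first):

```python
import numpy as np

def summarize_boxes(boxes, label):
    """Print the box count and timestamp range of a structured box array."""
    print(f"{label}: {len(boxes)} boxes, fields: {boxes.dtype.names}")
    if len(boxes) and "t" in boxes.dtype.names:
        print(f"  t range: {boxes['t'].min()} .. {boxes['t'].max()} us")

# In practice you would load the real files, e.g.:
# gt = np.load("gt/moorea_2019-02-15_000_td_366500000_426500000_td.npy")
# dt = np.load("dt/moorea_2019-02-15_000_td_366500000_426500000_td.npy")

# Demo with a tiny synthetic array using the assumed field layout:
demo = np.array([(366500000, 10.0, 20.0, 30.0, 40.0, 0)],
                dtype=[("t", "<u8"), ("x", "<f4"), ("y", "<f4"),
                       ("w", "<f4"), ("h", "<f4"), ("class_id", "<u4")])
summarize_boxes(demo, "demo")
```

If the two time ranges are disjoint by more than time_tol, no detection can ever be matched to a ground-truth box, which would produce exactly the all-zero AP/AR above.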

I saved the data from dt_data and gt_data to the attached files dt_dt.txt and gt_gt.txt.

I trained my model with these class names: `['pedestrian', 'two wheeler', 'car', 'truck', 'bus', 'traffic sign', 'traffic light']`.

Best Regards!

lbristiel-psee commented 2 months ago

I cannot tell whether the problem comes from your model or from the evaluation script. From what I see, the evaluation script should take care of the timestamp shift, but you can double-check that yourself by looking at the source code. You can also browse the content of your detection file and compare it manually with the ground truth to see whether the model works OK or not.

raf329 commented 2 months ago

> I cannot tell whether the problem comes from your model or from the evaluation script. From what I see, the evaluation script should take care of the timestamp shift, but you can double-check that yourself by looking at the source code. You can also browse the content of your detection file and compare it manually with the ground truth to see whether the model works OK or not.

Yes, I just found out that the accuracy of my model is not good enough...

Thanks a lot for your help, and for the dataset from Prophesee!

Best Regards!