woven-visionai / wts-dataset


The evaluation cannot run successfully with the example test data #5

Open · sherlock666 opened 9 months ago

sherlock666 commented 9 months ago

Without any code change, I got:

Error: ZeroDivisionError('division by zero')
Traceback (most recent call last):
  File "g:/aicity-track2/evaluation/eval-metrics-AIC-Track2/metrics.py", line 222, in <module>
    metrics_pedestrian_mean = compute_mean_metrics(metrics_pedestrian_overall, num_segments_overall)
  File "g:/aicity-track2/evaluation/eval-metrics-AIC-Track2/metrics.py", line 174, in compute_mean_metrics
    metrics_mean[metric_name] /= num_segments_overall
ZeroDivisionError: division by zero
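
For context, the error fires when the accumulated metrics are divided by a segment count of zero. A minimal sketch of what the traceback implies, assuming a structure roughly like the one below (only the names `compute_mean_metrics`, `metrics_mean`, `metric_name`, and `num_segments_overall` come from the trace; the body and the guard are assumptions, not the repo's actual code):

```python
# Hedged reconstruction from the traceback -- not the repo's actual code.
def compute_mean_metrics(metrics_overall, num_segments_overall):
    """Average the accumulated per-segment metrics over the segment count."""
    metrics_mean = dict(metrics_overall)
    if num_segments_overall == 0:
        # This is the condition behind the ZeroDivisionError: no segments
        # were matched between predictions and ground truth.
        raise ValueError(
            "No segments were scored; check that the prediction file and "
            "GROUND_TRUTH_DIR_PATH point at matching scenarios."
        )
    for metric_name in metrics_mean:
        metrics_mean[metric_name] /= num_segments_overall
    return metrics_mean
```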

woven-visionai commented 9 months ago

Did you use our example data and example GT file for this test? We tested again and cannot reproduce this error.

sherlock666 commented 9 months ago

Yes, I used the JSON files inside the testdata folder (pred_different.json, pred_identical.json). I even printed out pred_all and gt_all to make sure that the path is correct (without changing anything else).

Update: I changed GROUND_TRUTH_DIR_PATH to the correct one and still get:

Namespace(pred='aicity-track2/evaluation/eval-metrics-AIC-Track2/testdata/pred_different.json')
Scenario gt\20230707_8_SN46_T1\overhead_view\20230707_8_SN46_T1 exists in ground-truth but not in predictions.Counting zero score for this scenario.
Scenario gt\20230728_16_SY19_T1\overhead_view\20230728_16_SY19_T1 exists in ground-truth but not in predictions.Counting zero score for this scenario.
Error: ZeroDivisionError('division by zero')
Traceback (most recent call last):
  File "g:/aicity-track2/evaluation/eval-metrics-AIC-Track2/metrics.py", line 218, in <module>
    metrics_pedestrian_mean = compute_mean_metrics(metrics_pedestrian_overall, num_segments_overall)
  File "g:/aicity-track2/evaluation/eval-metrics-AIC-Track2/metrics.py", line 170, in compute_mean_metrics
    metrics_mean[metric_name] /= num_segments_overall
ZeroDivisionError: division by zero
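
The two warnings above suggest a likely cause: the ground-truth scenario keys contain Windows backslashes (`gt\20230707_8_SN46_T1\...`) while the prediction JSON presumably keys scenarios with forward slashes, so no scenario matches and the segment count stays zero. A minimal sketch of one way to normalize the keys before comparison (`normalize_scenario_key` is a hypothetical helper, not part of the repo):

```python
# Hypothetical helper, not part of the repo: collapse OS-specific path
# separators so ground-truth and prediction scenario keys compare equal.
from pathlib import PureWindowsPath

def normalize_scenario_key(key: str) -> str:
    # PureWindowsPath accepts both "\" and "/" as separators, so this
    # behaves the same on Windows and Linux.
    return PureWindowsPath(key).as_posix()

assert normalize_scenario_key(r"gt\20230707_8_SN46_T1\overhead_view") == \
       "gt/20230707_8_SN46_T1/overhead_view"
```
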
woven-visionai commented 9 months ago

Could you please check that you are using nltk>=3.8?

We also updated our evaluation code for the challenge. Could you please give it a try?
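
For reference, a quick way to confirm the installed nltk version (plain Python, nothing repo-specific):

```python
# Print the installed nltk version; it should be 3.8 or newer.
import nltk
print(nltk.__version__)
```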

woven-visionai commented 9 months ago

Hi, thanks for sharing this issue. In order to reproduce it on my side, could you please share more details below?

From my side, in order to test with the toy test data, I use the latest evaluation code repo and run `python metrics_test.py --pred testdata/pred_identical.json` from the folder `evaluation/eval-metrics-AIC-Track2`. I currently don't see any error from this command.