naraysa / 3c-net

Weakly-supervised Action Localization

Evaluation Code Problem #1

Closed: Flowerfan closed this issue 4 years ago

Flowerfan commented 4 years ago

Hi, Sanath

I ran the code recently using the I3D features and got the same result reported in the paper, mAP@IoU=0.5 = 26.6 on the THUMOS14 dataset. However, when I saved the predictions and evaluated them with the official code, I got 23. Were the results reported in the paper obtained with the evaluation code in detectionMAP.py or with the official code? If you used the official code, could you please release the model checkpoint?

Thank you,
Fan

naraysa commented 4 years ago

Hi Fan,

Nice to know you could reproduce our results. For evaluation, we use only the detectionMAP.py file. One reason for the lower value from the official code could be that it expects the confidence scores to be in the [0, 1] range, whereas the detections from our model are not in that range.
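
For reference, one way to compare against the official evaluation would be to rescale the saved scores into [0, 1] first. A minimal sketch, assuming predictions are stored as (t_start, t_end, class_id, score) tuples (an illustrative layout, not necessarily how the repo saves them):

```python
def normalize_scores(detections):
    """Min-max rescale detection confidence scores into [0, 1].

    `detections` is assumed to be a list of (t_start, t_end, class_id, score)
    tuples; adapt the indexing to however the predictions were actually saved.
    """
    scores = [d[3] for d in detections]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) if hi > lo else 1.0
    return [(s, e, c, (sc - lo) / span) for (s, e, c, sc) in detections]
```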

Flowerfan commented 4 years ago

It seems that the evaluation code only computes the IoU scores between the predicted segments and the ground-truth segments, while the classes of the segments are ignored.
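
For clarity, the temporal IoU between a predicted and a ground-truth segment (each given as (t_start, t_end) in seconds) can be computed as in this sketch; the function name is illustrative:

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two segments, each given as (t_start, t_end)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0
```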

naraysa commented 4 years ago

Yes, our evaluation code ignores predicted segments belonging to classes whose scores do not cross a particular threshold. This removes a good number of false positives.
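
A minimal sketch of this kind of filtering, assuming per-video class scores and a hypothetical threshold value (both the data layout and the 0.1 default are only for illustration):

```python
def filter_segments_by_class_score(segments, video_class_scores, threshold=0.1):
    """Drop predicted segments whose video-level class score is below `threshold`.

    `segments`: list of (t_start, t_end, class_id, score) tuples.
    `video_class_scores`: sequence of per-class confidence scores for the video.
    """
    return [seg for seg in segments if video_class_scores[seg[2]] >= threshold]
```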