Closed Flowerfan closed 4 years ago
Hi Fan,
Nice to know you could get the same results as us. For evaluation, we are using the detectionMAP.py file only. One reason for the lower value from the official code could be that it expects the confidence scores to lie in the [0, 1] range. However, the detection scores produced by our model are not in that range.
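If that range mismatch is indeed the cause, one workaround before running the official evaluation would be to min-max normalize the raw detection scores. This is only a sketch to illustrate the point; `normalize_scores` is a hypothetical helper, not part of the repository:

```python
import numpy as np

def normalize_scores(scores):
    """Min-max normalize raw detection confidences into [0, 1].

    Hypothetical helper illustrating the range mismatch discussed
    above; the repository's actual score handling may differ.
    """
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    if hi == lo:
        # All scores identical: map everything to 1.0.
        return np.ones_like(scores)
    return (scores - lo) / (hi - lo)

raw = [-1.3, 0.2, 2.7, 5.1]       # example raw model outputs
print(normalize_scores(raw))       # values now lie in [0, 1]
```

Whether this recovers the reported numbers would still need to be checked against the official toolkit.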
It seems that the evaluation code only calculates the IoU scores between the predicted segments and the ground-truth segments, while ignoring the classes of the segments.
Yes, our evaluation code discards predicted segments whose class confidence does not cross a particular threshold. This removes a good number of false positives.
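For anyone reproducing this, the two pieces discussed here can be sketched as follows: a standard temporal IoU between 1-D segments, and a confidence filter that drops low-scoring detections. The threshold value and the detection tuple layout are illustrative assumptions, not the repository's actual settings:

```python
def temporal_iou(seg_a, seg_b):
    """IoU of two 1-D temporal segments given as (start, end)."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def filter_by_class_score(detections, thresh=0.1):
    """Keep only detections whose class confidence crosses `thresh`.

    `detections` is assumed to be a list of (start, end, label, score)
    tuples; the threshold 0.1 is a placeholder, not the repo's value.
    """
    return [d for d in detections if d[3] >= thresh]

dets = [(0.0, 10.0, "dive", 0.9), (5.0, 8.0, "dive", 0.02)]
kept = filter_by_class_score(dets)          # drops the low-score segment
print(temporal_iou((0.0, 10.0), (5.0, 15.0)))  # overlap 5 / union 15
```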
Hi, Sanath
I ran the code recently using I3D features and got the same result reported in the paper, with mAP@IoU=0.5 of 26.6 on the THUMOS14 dataset. However, when I saved the predictions and evaluated them with the official code, I got 23. Are the results reported in the paper obtained with the evaluation code in detectionMAP.py or with the official code? If you are using the official code, could you please release the model checkpoint?
Thank you Fan