Closed fantasysee closed 3 years ago
Previously I thought the accuracy meant Top-5 accuracy, since there is a huge gap between the result I obtained by following the training methods in your paper and the result reported in the paper.
If there is any training step I missed, please correct me.
Or maybe the reason my Top-5 accuracy exceeds the one in your paper is that I trained for 250 more epochs starting from the pre-trained model?
Look forward to your reply! Thanks in advance.
Hi @fantasysee, "video classification accuracy" refers to Top-1 accuracy. Most probably you are getting clip accuracy, not video accuracy. Once your training is finished, you need to additionally calculate video accuracy. For that calculation, the scores of the non-overlapping consecutive clips of each video are averaged. Check out the "Calculating Video Accuracy" part of the README.
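For reference, the averaging step can be sketched roughly as below. This is a minimal illustration, not the repo's actual code: the function name, argument shapes, and the assumption that clip scores are already softmax outputs grouped by video id are all hypothetical.

```python
import numpy as np

def video_accuracy(clip_scores, clip_video_ids, video_labels):
    """Top-1 video accuracy from per-clip scores (illustrative sketch).

    clip_scores    : (num_clips, num_classes) array of per-clip class scores
    clip_video_ids : (num_clips,) array giving the video id of each clip
    video_labels   : dict mapping video id -> ground-truth class index
    """
    correct = 0
    for vid, label in video_labels.items():
        # gather the scores of all non-overlapping clips of this video
        scores = clip_scores[clip_video_ids == vid]
        # average clip scores, then take the Top-1 prediction
        pred = scores.mean(axis=0).argmax()
        correct += int(pred == label)
    return correct / len(video_labels)
```

Averaging the scores before the argmax is what lets a few misclassified clips be outvoted by the rest of the video, which is why video accuracy is normally well above clip accuracy.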
Hi @okankop . Thank you very much for your warm and timely reply!
I have followed the "Calculating Video Accuracy" part of the README and calculated the video accuracy.
Nevertheless, the accuracy I measured is much lower than that in your paper.
Using the opt listed above, the pre-trained and fine-tuned model achieves 51.84%, while 70.95% is reported in your paper.
And the model trained from scratch achieves 39.55%.
Would you please tell me what may cause the drop in accuracy? Is there any other step I missed? ;(
Going from clip accuracy to video accuracy, the score usually increases by around 15-20% on the UCF dataset. Did you observe the same increase? If not, your video accuracy calculation may be wrong.
No, I observe a video accuracy similar to the clip accuracy ;( Thank you very much for your tip! It helps a lot!!!
Hi @okankop ,
Thanks very much for sharing such a wonderful repo!
I am a little bit confused about the metric "video classification accuracy" in your paper. I am not sure whether it means Top-1 or Top-5 accuracy.
The confusion comes from my experiment results based on your repo.
Results on the MobileNetV1 model with the UCF-101 dataset