Closed: gurkirt closed this issue 2 years ago
We provide the checkpoint with the best results and report its mAP; we are going to double-check and test this checkpoint. Besides, I'm not sure what causes the lower precision in your experiments; maybe you can try batch size 6 with half the learning rate.
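For reference, the "half the learning rate for half the batch size" suggestion matches the common linear scaling rule. A minimal sketch of that rule (the helper and the numbers below are illustrative assumptions, not values taken from the mmaction2 config):

```python
def scale_lr(base_lr, base_batch_size, new_batch_size):
    """Linear scaling rule: learning rate scales with total batch size."""
    return base_lr * new_batch_size / base_batch_size

# Illustrative only: halving the per-GPU batch from 12 to 6
# halves the learning rate under this rule.
halved = scale_lr(0.1, 12, 6)
```

Whether the rule holds exactly for a given recipe depends on the optimizer and warmup schedule, so treat it as a starting point rather than a guarantee.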
Thanks for the suggestion. I think I am going to switch back to pyslowfast because of another issue about evaluation comparison.
I also got lower results (reported: 27.8 mAP; what I got: 27.1 mAP) for this config: https://github.com/open-mmlab/mmaction2/blob/master/configs/detection/acrn/slowfast_acrn_kinetics_pretrained_r50_8x8x1_cosine_10e_ava22_rgb.py
Does the PyTorch version affect performance? I saw this warning:
```
UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
```
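For what it's worth, that warning concerns how PyTorch derives sampling coordinates when `scale_factor` is a non-integer float: since 1.6 the given scale factor is used directly instead of being recomputed from the floored output size. A pure-Python illustration of the arithmetic (this is a sketch of the idea, not PyTorch source code):

```python
import math

in_size = 10
scale_factor = 1 / 3  # a non-integer float scale

# The output size is floored either way.
out_size = math.floor(in_size * scale_factor)  # 3

# Old behavior (recompute_scale_factor=True): the scale used for sampling
# is recomputed from the floored output size.
recomputed = out_size / in_size  # 0.3, not 0.333...

# New default (PyTorch >= 1.6): the given scale_factor is used directly,
# so sampling coordinates can shift slightly between versions, which can
# move mAP by a small amount.
```

So yes, small cross-version differences in interpolation are a plausible contributor to sub-0.5 mAP gaps, though likely not the whole story.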
To be honest, I did not try what @kennymckormick mentioned, but I doubt it would actually result in anything better. I switched back to pyslowfast for another reason: https://github.com/open-mmlab/mmaction2/issues/1268. I get better numbers in the untrimmed setting using pyslowfast. The untrimmed setting is what is reported in papers, so the trimmed setting used here is not useful for my case.
If you feel we have helped you, give us a STAR! :satisfied: Done!
Notice
There are several common situations in reimplementation issues, as listed below.
I am interested in issue 1.
First of all, thank you for the amazing code base.
I am interested in spatiotemporal detection models on the AVA dataset. I followed your dataset instructions and was able to get a test-time mAP of 26.27 with the provided model. I used the AVAv2.2 model, specifically slowfast_temporal_max_kinetics_pretrained_r50_8x8x1_cosine_10e_ava22_rgb. The reported 26.4 seems to be the best mAP after epoch 8 rather than at the end of complete training, and I am not sure which epoch's checkpoint you released in the model zoo. Nevertheless, I think the gap could reasonably be due to environment and PyTorch version differences.
However, when I train the same model myself, the best mAP after the 8th epoch is 25.73, and I don't know what to attribute that to. The only difference from the existing setup is that I used 4 TITAN GPUs with 24 GB each, so I could set 12 examples per GPU and train on 4 GPUs without changing anything else.
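As a sanity check on that setup change: if the total batch size is unchanged, the linear scaling rule suggests the learning rate should not need adjustment. A quick sketch (the 8-GPU x 6-per-GPU reference below is an assumption about the original recipe, not something confirmed in the repo):

```python
def effective_batch(num_gpus, samples_per_gpu):
    # Total batch size across all GPUs per optimization step.
    return num_gpus * samples_per_gpu

mine = effective_batch(4, 12)      # my setup: 4 GPUs x 12 clips -> 48
reference = effective_batch(8, 6)  # assumed original recipe -> 48

# Equal totals mean the base learning rate need not change under the
# linear scaling rule; however, per-GPU BatchNorm statistics still differ
# (12 vs 6 clips per GPU), which could account for a small mAP gap.
```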
Here is my environment:
I accidentally deleted the training log, but I can provide the evaluation log for both my model and your released one.