Closed IgnacioSan22 closed 9 months ago
For the SlowFast features, I recommend looking at the SlowFast repository. Once you extract the video features, you can simply concatenate the CLIP and SlowFast features. To use them in run_on_video, you should replace the checkpoint with the one in the README.
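Concatenating the two feature streams can be sketched as below. This is a minimal illustration, not the repository's actual loading code; the shapes (512-dim CLIP features, 2304-dim SlowFast features) are assumptions based on common extractor configs, and both streams must be aligned on the same temporal axis before concatenation.

```python
import numpy as np

# Hypothetical shapes: 75 clips, 512-dim CLIP features, 2304-dim SlowFast
# features. The real dimensions depend on the extractor configurations.
clip_feats = np.random.randn(75, 512).astype(np.float32)       # (num_clips, clip_dim)
slowfast_feats = np.random.randn(75, 2304).astype(np.float32)  # (num_clips, sf_dim)

# Both streams must cover the same clips (same first dimension);
# concatenate along the feature dimension.
assert clip_feats.shape[0] == slowfast_feats.shape[0]
video_feats = np.concatenate([clip_feats, slowfast_feats], axis=1)
print(video_feats.shape)  # (75, 2816)
```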
There are a lot of pretrained versions in the SlowFast repository. I suppose I have to use the same one you used for training, otherwise it won't reproduce the results.
Which version in https://github.com/facebookresearch/SlowFast/blob/main/MODEL_ZOO.md should I use?
Hello. For that kind of issue, I recommend asking the authors of moment-detr, since we used the features they provided. Thank you.
Thank you very much, I found the version: it is the one used here: https://github.com/linjieli222/HERO_Video_Feature_Extractor/blob/main/slowfast/configs/Kinetics/c2/SLOWFAST_8x8_R50.yaml.
I have another question, related to inference. If I run the model a few times, the results are different, and in some cases the predictions are much better for the same query and input. Is there any way of removing the randomness?
I guess the source of such variance is the dropout.
In that case there are two options: the first is to set a static seed, and the second is to set the dropout rate to 0.
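The seed-based option can be sketched as below. This is a generic helper, not code from the repository; in a PyTorch setup you would additionally call torch.manual_seed(seed) (and torch.cuda.manual_seed_all(seed) on GPU), and note that calling model.eval() already disables dropout layers at inference time.

```python
import random

import numpy as np


def set_seed(seed: int) -> None:
    # Fix the common sources of randomness. With PyTorch you would also
    # call torch.manual_seed(seed) and torch.cuda.manual_seed_all(seed).
    random.seed(seed)
    np.random.seed(seed)


# With the same seed, the same random draws are produced, so repeated
# runs give identical results.
set_seed(42)
a = np.random.rand(3)
set_seed(42)
b = np.random.rand(3)
print(np.allclose(a, b))  # True
```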
Thanks.
I read in some other issues that you mention the video-only model was trained with CLIP+SlowFast features. Can you explain how to extract the features with SlowFast and combine them with the CLIP ones in the run_on_video.py script?
Thanks in advance!