Hi,
I am interested in fine-tuning ActionCLIP on my own dataset. There isn't a dedicated section on the project page, but I think I should follow this procedure: https://mmaction2.readthedocs.io/en/latest/user_guides/finetune.html by running this command:
mim train mmaction configs/actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py
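To make my intent concrete, here is a rough sketch of the kind of override config I imagine writing (all file names, paths, and field values below are my assumptions based on the general MMEngine config style, not verified against the ActionCLIP project):

```python
# Hypothetical fine-tuning config sketch; inherits the released ActionCLIP
# config via MMEngine's `_base_` mechanism. Paths are placeholders.
_base_ = ['actionclip_vit-base-p32-res224-clip-pre_g8xb16_1x1x8_k400-rgb.py']

# Point the training pipeline at the custom dataset (placeholder paths).
train_dataloader = dict(
    dataset=dict(
        ann_file='data/my_dataset/train_list.txt',
        data_prefix=dict(video='data/my_dataset/videos'),
    ))

# Start from the released Kinetics-400 checkpoint (placeholder path).
load_from = 'checkpoints/actionclip_k400.pth'
```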
However, I have a few doubts:
the configuration file I should start from doesn't have the "head" section that the fine-tuning guide says to modify. I suppose this is due to the CLIP architecture at the base of ActionCLIP, or am I missing something?
how do I know which parts are kept frozen? In Table 5 of the original paper, the authors compare different fine-tuning procedures, and I want to be certain of what I am actually training.
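For the second doubt, the workaround I currently have in mind is a generic PyTorch check (nothing ActionCLIP-specific): after the model is built, count trainable vs. frozen parameters per top-level submodule. The tiny model below is just a stand-in for the real one:

```python
import torch.nn as nn

def summarize_trainable(model: nn.Module) -> dict:
    """Map each top-level submodule name to (trainable, frozen) param counts."""
    summary = {}
    for name, module in model.named_children():
        trainable = sum(p.numel() for p in module.parameters() if p.requires_grad)
        frozen = sum(p.numel() for p in module.parameters() if not p.requires_grad)
        summary[name] = (trainable, frozen)
    return summary

# Toy example: freeze a "backbone" and leave a "head" trainable.
model = nn.Sequential()
model.add_module('backbone', nn.Linear(8, 8))
model.add_module('head', nn.Linear(8, 2))
for p in model.backbone.parameters():
    p.requires_grad = False

print(summarize_trainable(model))  # backbone fully frozen, head trainable
```

Is inspecting `requires_grad` like this the right way to verify what the provided config freezes, or does the training loop override it?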
Thank you in advance for your clarification!
Suggest a potential alternative/fix
No response