OpenGVLab / UniFormerV2

[ICCV2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer
https://arxiv.org/abs/2211.09552
Apache License 2.0

Usage of LMHRA #1

Closed · vateye closed 1 year ago

vateye commented 1 year ago

I have noticed that 'NO_LMHRA' is enabled in most of the experiments, such as K400/K600/K700/K710. Why should we not use LMHRA in the model, since it is intuitive that local temporal cues should be exploited during training?

Andy1621 commented 1 year ago

Thanks for your good question! As shown in our ablation, local MHRA and temporal downsampling only work on temporal-related datasets (i.e., Sth-Sth). On scene-related datasets (e.g., Kinetics), they bring little or no improvement. We hope to offer the simplest model, which is helpful for scaling it up, so we only add a 4-layer global MHRA in most of our models. Such a design has proven simple yet effective in our experiments.

[screenshot: ablation table]
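For reference, the global MHRA here is essentially cross-attention in which a learnable query token aggregates the spatiotemporal tokens of a ViT layer. A minimal PyTorch sketch of the idea (simplified, hypothetical names — not the repo's actual implementation; see the real code linked in the comments below):

```python
import torch
import torch.nn as nn

class GlobalMHRA(nn.Module):
    """Simplified sketch: a learnable query cross-attends to the
    spatiotemporal tokens produced by one (frozen) ViT layer."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))  # zero-init query
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens):
        # tokens: (B, T*H*W, C) flattened video tokens from a ViT layer
        q = self.norm_q(self.query).expand(tokens.size(0), -1, -1)
        kv = self.norm_kv(tokens)
        out, _ = self.attn(q, kv, kv)  # (B, 1, C) video-level summary
        return out

# In UniFormerV2, a few such blocks (e.g., on the last 4 ViT layers)
# are fused into the final video representation.
```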
vateye commented 1 year ago

Thanks for your reply. Besides, I have some questions about the initialization and the training. Since you introduce new parameters into the original ViT, I did not see anywhere that you initialize the new parameters with Xavier or Normal (apart from those initialized with zeros). Meanwhile, the new parameters seem to share the same learning rate as the ViT; I am wondering why you do not use a larger learning rate for those parameters.

Andy1621 commented 1 year ago

For most of the new parameters, I just use the default initialization. In fact, the default initialization in PyTorch is enough. You can read the source code for those layers in PyTorch.
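For instance, constructing a standard PyTorch layer already runs its `reset_parameters()`:

```python
import torch.nn as nn

# Building a layer runs its reset_parameters() internally:
# nn.Linear / nn.Conv3d default to Kaiming-uniform weights and
# uniformly sampled biases, so no explicit Xavier/Normal call is needed.
proj = nn.Linear(768, 768)
conv = nn.Conv3d(768, 768, kernel_size=1)
```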

For some special layers, I initialize them with zeros:

- the last point-wise convolutions in the local temporal MHRA,
- the query tokens and output projection layers in the query-based cross MHRA,
- the last linear layers in the FFN of the global UniBlock,
- the learnable fusion weights.

https://github.com/OpenGVLab/UniFormerV2/blob/6a678d0b82435fd7f86757cedc35de00bb28ced3/slowfast/models/uniformerv2_model.py#L53-L56

https://github.com/OpenGVLab/UniFormerV2/blob/6a678d0b82435fd7f86757cedc35de00bb28ced3/slowfast/models/uniformerv2_model.py#L153-L159

https://github.com/OpenGVLab/UniFormerV2/blob/6a678d0b82435fd7f86757cedc35de00bb28ced3/slowfast/models/uniformerv2_model.py#L219
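As a self-contained sketch of the pattern behind those lines (module and tensor names are hypothetical; the real layers live in the files linked above), zero-initializing the last projection makes the whole residual branch a no-op at initialization:

```python
import torch
import torch.nn as nn

class LocalTemporalBlock(nn.Module):
    """Sketch of the zero-init pattern: the branch's last point-wise
    conv starts at zero, so `x + branch(x) == x` at initialization."""
    def __init__(self, dim):
        super().__init__()
        # depth-wise temporal conv keeps PyTorch's default (Kaiming) init
        self.dw_conv = nn.Conv3d(dim, dim, kernel_size=(3, 1, 1),
                                 padding=(1, 0, 0), groups=dim)
        # the last point-wise conv is zero-initialized
        self.pw_conv = nn.Conv3d(dim, dim, kernel_size=1)
        nn.init.zeros_(self.pw_conv.weight)
        nn.init.zeros_(self.pw_conv.bias)

    def forward(self, x):  # x: (B, C, T, H, W)
        return x + self.pw_conv(self.dw_conv(x))

x = torch.randn(2, 64, 8, 14, 14)
block = LocalTemporalBlock(64)
assert torch.allclose(block(x), x)  # the block is an identity before training
```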

Such zero initialization keeps the block's output identical to its original input at the start of training (the zero-initialized branch contributes nothing, so each new block starts as an identity), which stabilizes early training. More importantly, I use a relatively small learning rate (e.g., 2e-5) compared to previous work, so I don't need to decrease/increase the learning rate for those new parameters. BTW, in my experiments, changing the learning rate scale does not bring any improvement.
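For reference, the single-LR recipe versus per-group scaling would look like this in PyTorch (the toy modules and the 10x factor are purely hypothetical; as noted above, the scaled variant brought no gain):

```python
import torch
import torch.nn as nn

# Toy stand-ins: a "pretrained" backbone plus newly added video layers.
backbone = nn.Linear(768, 768)
new_layers = nn.Linear(768, 400)

# The recipe described above: one small learning rate for everything.
optimizer = torch.optim.AdamW(
    list(backbone.parameters()) + list(new_layers.parameters()), lr=2e-5)

# Per-group LRs, for comparison only (reportedly no improvement):
optimizer = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": 2e-5},
    {"params": new_layers.parameters(), "lr": 2e-4},  # hypothetical 10x LR
])
```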

vateye commented 1 year ago

Thanks!