Closed xmy0916 closed 10 months ago
Yes, this is just a reimplementation of our model. You can enable it during training, but you need to implement the layer-wise learning rate yourself.
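Since the repo does not ship layer-wise learning rates, here is a minimal sketch of one common scheme (layer-wise LR decay, as used in BERT/ViT fine-tuning): later layers keep the base LR while earlier layers get geometrically smaller ones. The helper name and decay factor are my own choices, not part of mPLUG-Owl2.

```python
def layerwise_lrs(num_layers, base_lr, decay):
    """Per-layer learning rates: the last (deepest) layer gets base_lr,
    each earlier layer is scaled down by `decay` (hypothetical helper)."""
    return [base_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]

# Example: 4 layers, base LR 1e-3, decay 0.5
lrs = layerwise_lrs(4, 1e-3, 0.5)

# In PyTorch you would then feed these into optimizer param groups, e.g.:
# groups = [{"params": layer.parameters(), "lr": lr}
#           for layer, lr in zip(model.layers, lrs)]
# optimizer = torch.optim.AdamW(groups)
```

This is only a sketch under the assumption that the model exposes its layers as an ordered list; the actual grouping depends on how the checkpoint names its parameters.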
Thanks for your great work and the answer~
I found code here: https://github.com/X-PLUG/mPLUG-Owl/blob/main/mPLUG-Owl2/scripts/finetune.sh#L32
You have frozen the vision backbone there, but according to your paper, it seems all the parameters were trained?