airsplay / lxmert

PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers".
MIT License

Freeze the model #78

Open titaiwangms opened 4 years ago

titaiwangms commented 4 years ago

Hi, I am wondering whether you freeze the pre-trained parameters and fine-tune only the task-specific head on the downstream task. It looks like all parameters are tuned during fine-tuning. Thanks!

yikuan8 commented 4 years ago

When fine-tuning on VQA, all parameters are trainable. If you only want to update the weights of the classifier head, you can freeze the lxrt_encoder in src/tasks/vqa_model.py.
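
A minimal sketch of what that freezing could look like, assuming the `VQAModel` in `src/tasks/vqa_model.py` exposes the pre-trained encoder as `lxrt_encoder` and the answer head as `logit_fc` (attribute names assumed, not confirmed here):

```python
import torch
import torch.nn as nn


def freeze_encoder(model: nn.Module) -> None:
    """Freeze the pre-trained LXMERT encoder so only the task head is fine-tuned.

    Assumes `model.lxrt_encoder` is the pre-trained cross-modality encoder
    (attribute name assumed from the repo layout).
    """
    for param in model.lxrt_encoder.parameters():
        param.requires_grad = False


# Usage sketch inside the VQA fine-tuning script (hypothetical names):
#   model = VQAModel(num_answers)
#   freeze_encoder(model)
#   # Pass only the still-trainable parameters (the head) to the optimizer.
#   optimizer = torch.optim.Adam(
#       (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```

Passing only the `requires_grad` parameters to the optimizer is optional (frozen parameters receive no gradients either way), but it avoids allocating optimizer state for the encoder.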