AILab-CVC / SEED

Official implementation of SEED-LLaMA (ICLR 2024).
https://ailab-cvc.github.io/seed

Missing Multimodal Pretraining step #32

Open shubhamgarg21 opened 8 months ago

shubhamgarg21 commented 8 months ago

Hi,

In the paper (https://arxiv.org/pdf/2310.01218.pdf), the following is mentioned in the pretraining section:

> For efficiency, we first train SEED-LLaMA using LoRA [32] tuning and together optimize the parameters of the embedding layer and decoder head layer due to the added visual codes. We then merge the parameters of LoRA onto the LLM backbone and fine-tune all parameters except for the embedding layer.

However, in the provided training steps, the part about fine-tuning all parameters except the embedding layer appears to be missing.
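For reference, here is a minimal sketch (not from the official repo) of what that second stage could look like, assuming a standard `transformers` + `peft` setup. The model and adapter paths are placeholders:

```python
# Sketch of the second pretraining stage described in the paper:
# merge the stage-1 LoRA weights into the LLM backbone, then
# fine-tune all parameters except the embedding layer.
# Paths below are placeholders, not actual SEED-LLaMA checkpoints.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the backbone (with the expanded vocabulary for visual codes)
# and the LoRA adapter trained in the first stage.
base = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-with-visual-codes", torch_dtype=torch.bfloat16
)
lora = PeftModel.from_pretrained(base, "path/to/stage1-lora-adapter")

# Merge the LoRA parameters onto the LLM backbone.
model = lora.merge_and_unload()

# Unfreeze everything, then freeze the embedding layer.
for param in model.parameters():
    param.requires_grad = True
for param in model.get_input_embeddings().parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable}")
```

The resulting `model` could then be passed to whatever trainer the repo uses for the earlier stage; the open question in this issue is whether such a script exists in the released code.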

Cerf-Volant425 commented 5 months ago

Same question here. Could you please add the corresponding script for the full fine-tuning stage? Thanks in advance.