Closed · cooper12121 closed this issue 10 months ago
Right, the parameters of LLMs are kept frozen during both pre-training and fine-tuning.
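For clarity, here is a minimal PyTorch sketch of that training setup: freeze the LLM and pass only the Q-Former parameters to the optimizer. The attribute names `language_model` and `qformer` are illustrative assumptions, not this repo's actual API.

```python
import torch

def freeze_llm_train_qformer(model: torch.nn.Module) -> list:
    """Freeze the LLM backbone; return the trainable Q-Former params.

    Assumes a BLIP-2-style model with (hypothetical) attributes
    `language_model` (the frozen LLaMA) and `qformer`.
    """
    # Disable gradients for every LLaMA parameter so it stays frozen
    # during both pre-training and fine-tuning.
    for param in model.language_model.parameters():
        param.requires_grad = False
    # Only the Q-Former parameters remain trainable.
    return [p for p in model.qformer.parameters() if p.requires_grad]

# Usage: hand only the trainable parameters to the optimizer, e.g.
# optimizer = torch.optim.AdamW(freeze_llm_train_qformer(model), lr=1e-4)
```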
Thank you for your reply. Have you tested multimodal tasks such as multimodal NER and relation extraction? How does fine-tuning only the Q-Former perform when the LLaMA parameters are frozen?
"During pre-training and fine-tuning, are the parameters of the llama frozen?"