Closed · thisurawz1 closed this issue 2 months ago
How do you apply LoRA adapters to all linear layers, e.g. q_proj, k_proj, v_proj, o_proj, w1, w2, w3, and lm_head? And which layers are fine-tuned by the current QLoRA script?
Except for the vision encoder, all of the layers are fine-tuned. The training parameters for LoRA and QLoRA are the same.
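For reference, here is a minimal sketch of how one could target all of those linear layers, assuming the script uses Hugging Face PEFT (the model name, rank, and alpha values below are placeholders, not values from this repo; `lm_head` is often fully trained via `modules_to_save` rather than given a LoRA adapter):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint name for illustration only.
model = AutoModelForCausalLM.from_pretrained("your-model-name")

# Attach LoRA adapters to every linear projection listed in the question;
# lm_head is kept in modules_to_save so it is trained in full precision.
lora_config = LoraConfig(
    r=16,                      # assumed rank
    lora_alpha=32,             # assumed scaling factor
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "w1", "w2", "w3"],
    modules_to_save=["lm_head"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # verify which parameters are trainable
```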
ok understood. thanks