InternLM / InternLM-XComposer

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

PLoRA disappears when applying lora. #173

Closed: yunsangju closed this issue 4 months ago

yunsangju commented 4 months ago

Hello. When I apply LoRA to the decoder, the PLoRA modules in the existing model structure are converted to plain Linear layers. I don't think this uses the PLoRA weights you have already trained. Is this expected?
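
(For reference, one way to confirm this is to count module classes before and after wrapping with PEFT. This is a minimal sketch, not part of the original report; it assumes the custom layers are implemented in a class literally named PLoRA and that `model` is the loaded InternLM-XComposer model.)

from collections import Counter
import torch.nn as nn

def count_module_classes(model: nn.Module) -> Counter:
    # Count module class names so the number of PLoRA vs. Linear layers
    # can be compared before and after calling get_peft_model.
    return Counter(type(m).__name__ for m in model.modules())

print(count_module_classes(model)['PLoRA'])  # expected to be > 0 before PEFT wrapping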

yuhangzang commented 4 months ago

This issue has been fixed by commit 3c522e7. You can pull the latest version of the fine-tuning code and check whether the problem still exists.

myownskyW7 commented 4 months ago

@yunsangju

yunsangju commented 4 months ago

@yuhangzang @myownskyW7 Hello. When I run the code below, the PLoRA modules still disappear.

from peft import LoraConfig, get_peft_model

# Freeze the base model parameters before attaching the LoRA adapters.
for _, param in model.model.named_parameters():
    param.requires_grad = False

lora_config = LoraConfig(
    r=lora_args.lora_r,
    lora_alpha=lora_args.lora_alpha,
    target_modules=lora_args.lora_target_modules,
    lora_dropout=lora_args.lora_dropout,
    bias=lora_args.lora_bias,
    task_type='CAUSAL_LM',
)

model = get_peft_model(model, lora_config)
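
After get_peft_model returns, a quick check (a sketch with assumed names, assuming the custom layer class is literally named PLoRA) shows whether any PLoRA modules survived the wrapping:

# List the remaining PLoRA modules; an empty list means PEFT replaced them
# with its own Linear-based LoRA layers.
plora_modules = [name for name, module in model.named_modules()
                 if type(module).__name__ == 'PLoRA']
print(f'PLoRA modules remaining after get_peft_model: {len(plora_modules)}')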