OpenBMB / MiniCPM-V

MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone

[BUG] resampler frozen while LoRA finetuning #243

Open Fr0do opened 1 month ago

Fr0do commented 1 month ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in FAQ?

Current Behavior

With the default config, the resampler is neither wrapped as a LoRA module nor unfrozen for full finetuning, so it stays frozen during LoRA training. This seems fixable by adding `modules_to_save=['resampler']` to `LoraConfig`.
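For reference, a minimal sketch of that workaround (assuming the `lora_args` fields from the finetune script and that the resampler is an attribute named `resampler` on the model):

```python
from peft import LoraConfig, get_peft_model

# Sketch of the suggested fix: modules_to_save keeps a full, trainable copy
# of the resampler alongside the LoRA adapters and saves it with the adapter.
lora_config = LoraConfig(
    r=lora_args.lora_r,
    lora_alpha=lora_args.lora_alpha,
    target_modules=lora_args.lora_target_modules,
    lora_dropout=lora_args.lora_dropout,
    bias=lora_args.lora_bias,
    task_type="CAUSAL_LM",
    modules_to_save=["resampler"],  # fully finetune the resampler
)
model = get_peft_model(model, lora_config)
```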

Expected Behavior

The resampler is finetuned during LoRA training.

Steps To Reproduce

Run finetune_lora.sh.

Environment

requirements.txt

Anything else?

No response

LDLINGLINGLING commented 3 days ago

```python
from types import MethodType

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

if training_args.use_lora:
    if training_args.use_lora and training_args.tune_llm:
        raise ValueError("The model cannot simultaneously adjust LLM parameters and apply LoRA.")

    rank0_print("Currently using LoRA for fine-tuning the MiniCPM-V model.")
    # Freeze the base LLM weights; only the LoRA adapters (and the resampler) will be trained.
    for name, param in model.llm.named_parameters():
        param.requires_grad = False
    lora_config = LoraConfig(
        r=lora_args.lora_r,
        lora_alpha=lora_args.lora_alpha,
        target_modules=lora_args.lora_target_modules,
        lora_dropout=lora_args.lora_dropout,
        bias=lora_args.lora_bias,
        layers_to_transform=lora_args.lora_layers_to_transform,
        task_type="CAUSAL_LM",
    )
    # PEFT expects the wrapped model to expose get_input_embeddings();
    # delegate to the underlying LLM if the wrapper does not define it.
    if not hasattr(model, 'get_input_embeddings'):
        def get_input_embeddings(self):
            return self.llm.get_input_embeddings()
        model.get_input_embeddings = MethodType(get_input_embeddings, model)
    if lora_args.q_lora:
        model = prepare_model_for_kbit_training(
            model, use_gradient_checkpointing=training_args.gradient_checkpointing
        )
    model = get_peft_model(model, lora_config)
    # Unfreeze the resampler so it is fully fine-tuned alongside the LoRA adapters.
    model.base_model.resampler.requires_grad_(True)
```

This is the new code I saw: the resampler module is fully fine-tuned, while the language model is fine-tuned efficiently with LoRA.
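As a quick sanity check (a minimal sketch, assuming the PEFT-wrapped `model` from the snippet above), you can confirm that the resampler parameters are actually trainable after `get_peft_model` and `requires_grad_(True)`:

```python
# Count trainable vs. frozen parameters in the resampler after wrapping.
trainable = sum(p.numel() for n, p in model.named_parameters()
                if "resampler" in n and p.requires_grad)
frozen = sum(p.numel() for n, p in model.named_parameters()
             if "resampler" in n and not p.requires_grad)
print(f"resampler trainable params: {trainable}, frozen: {frozen}")

# PEFT also provides an overall summary of trainable parameters.
model.print_trainable_parameters()
```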