haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0
20.52k stars 2.27k forks

[Usage] Merging LoRA weights into llava-13b fails with bizarre error #1552

Open maxall41 opened 5 months ago

maxall41 commented 5 months ago

Describe the issue

Issue:

I have fine-tuned liuhaotian/llava-v1.5-13b on an OCR task using LoRA. I am now trying to use this model for inference, but when I try to merge the LoRA weights it throws a bizarre error that simultaneously lists the Llava configs (LlavaConfig, LlavaMptConfig, LlavaMistralConfig) as supported and reports them as unrecognized. I am using the latest version of the repo (commit c121f04) and the default conda environment, with the only difference being that I installed protobuf because it threw an error if it wasn't installed. I have been able to replicate this across multiple cloud machines.

transformers==4.37.2 and tokenizers==0.15.1

Command:

python scripts/merge_lora_weights.py \
    --model-path ./checkpoint-200/ \
    --model-base 'liuhaotian/llava-v1.5-13b' \
    --save-model-path merge_model 

Log:

Traceback (most recent call last):
  File "/home/shadeform/LLaVA/scripts/merge_lora_weights.py", line 22, in <module>
    merge_lora(args)
  File "/home/shadeform/LLaVA/scripts/merge_lora_weights.py", line 8, in merge_lora
    tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, device_map='cpu')
  File "/home/shadeform/LLaVA/llava/model/builder.py", line 128, in load_pretrained_model
    model = AutoModelForCausalLM.from_pretrained(model_base, low_cpu_mem_usage=True, **kwargs)
  File "/home/shadeform/miniconda3/envs/llava/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 569, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, LlavaConfig, LlavaMptConfig, LlavaMistralConfig.

Notice that, according to the error itself, LlavaConfig, LlavaMptConfig, and LlavaMistralConfig are in the list of supported config classes — yet LlavaConfig is the very class reported as unrecognized.

NandhaKishorM commented 5 months ago

add llava in the folder name
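To expand on this: `load_pretrained_model` in `llava/model/builder.py` decides how to load a checkpoint by looking at substrings of the model path. If the folder name does not contain `llava`, it falls back to `AutoModelForCausalLM.from_pretrained`, which cannot handle `LlavaConfig` — hence the error above. A minimal sketch of that dispatch (paraphrased for illustration, not the actual builder source):

```python
from typing import Optional

def pick_loader(model_path: str, model_base: Optional[str]) -> str:
    """Paraphrased sketch of the path-based dispatch in llava/model/builder.py."""
    name = model_path.lower()
    if "llava" in name:
        # LoRA checkpoints additionally need "lora" in the name plus a base model,
        # so the builder knows to load the base weights and merge the adapter.
        if "lora" in name and model_base is not None:
            return "LlavaLlamaForCausalLM + LoRA merge"
        return "LlavaLlamaForCausalLM"
    # Anything else is routed to plain AutoModelForCausalLM, which rejects LlavaConfig.
    return "AutoModelForCausalLM"

# A folder named "checkpoint-200" takes the failing branch:
print(pick_loader("./checkpoint-200/", "liuhaotian/llava-v1.5-13b"))
# A name containing both "llava" and "lora" takes the LoRA-merge branch:
print(pick_loader("./llava-v1.5-13b-lora-checkpoint-200/", "liuhaotian/llava-v1.5-13b"))
```

So the practical fix is to rename the checkpoint directory to something like `llava-v1.5-13b-lora-checkpoint-200` (the exact name is a placeholder; it just needs the right substrings) and pass that renamed path as `--model-path`.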

sumit-mahaseel commented 5 months ago

where exactly? @NandhaKishorM

HenryJunW commented 4 months ago

@sumit-mahaseel same issue, did you fix it?