InternLM / xtuner

An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
https://xtuner.readthedocs.io/zh-cn/latest/
Apache License 2.0

Error when converting a llava-phi3 model to llava format. How can this be fixed? #641

Closed awzhgw closed 5 months ago

awzhgw commented 6 months ago

```python
Traceback (most recent call last):
  File "/export/App/training_platform/PinoModel/xtuner/xtuner/configs/llava/phi3_mini_4k_v16/convert_xtuner_weights_to_llava.py", line 99, in <module>
    main()
  File "/export/App/training_platform/PinoModel/xtuner/xtuner/configs/llava/phi3_mini_4k_v16/convert_xtuner_weights_to_llava.py", line 94, in main
    convert_to_llava(args.text_model_id, args.vision_model_id,
  File "/export/App/training_platform/PinoModel/xtuner/xtuner/configs/llava/phi3_mini_4k_v16/convert_xtuner_weights_to_llava.py", line 80, in convert_to_llava
    model.load_state_dict(state_dict, strict=True, assign=True)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LlavaLlamaForCausalLM:
	Missing key(s) in state_dict: "model.image_newline".
```
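For context, the failure comes from PyTorch's strict checkpoint loading: `load_state_dict(strict=True)` raises a `RuntimeError` whenever the model defines a parameter (here `model.image_newline`, which the Llama-based LLaVA class has but the phi3 checkpoint lacks) that is absent from the state dict. A minimal sketch with a hypothetical toy module illustrates the behavior:

```python
import torch
import torch.nn as nn


class TinyLlava(nn.Module):
    """Hypothetical stand-in for LlavaLlamaForCausalLM."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4, 4)
        # Parameter defined by the model but missing from the checkpoint,
        # analogous to "model.image_newline" in the traceback above.
        self.image_newline = nn.Parameter(torch.zeros(4))


# A checkpoint that covers the projection weights but not image_newline.
state_dict = {"proj.weight": torch.zeros(4, 4), "proj.bias": torch.zeros(4)}

model = TinyLlava()
try:
    model.load_state_dict(state_dict, strict=True)
except RuntimeError as e:
    # strict=True reports the missing key, exactly as in the issue.
    print("image_newline" in str(e))

# strict=False skips the check and returns the mismatch lists instead.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(missing)  # ['image_newline']
```

This only explains the error; as the maintainer notes below, the real fix is converting the phi3 weights to the Llama layout the script expects, not loosening the strict check.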

pppppM commented 6 months ago

@awzhgw The convert_xtuner_weights_to_llava.py script only supports models whose LLM uses the Llama architecture.

We will release a conversion script for phi3 shortly; phi3 must first be converted to the Llama format.