modelscope / ms-swift

Use PEFT or Full-parameter to finetune 300+ LLMs or 80+ MLLMs. (Qwen2, GLM4v, Internlm2.5, Yi, Llama3.1, Llava-Video, Internvl2, MiniCPM-V-2.6, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0

llava-llama-3-8b-v1_1 AttributeError: 'NoneType' object has no attribute 'get_output_embeddings' #1911

Open thisiskofi opened 1 week ago

thisiskofi commented 1 week ago

Describe the bug

model_type="llava-llama-3-8b-v1_1"
CUDA_VISIBLE_DEVICES=0 swift infer \
    --model_type $model_type \
    --infer_backend lmdeploy

Error:

  File "/home/kboakye/code/swift/swift/cli/infer.py", line 5, in <module>
    infer_main()
  File "/home/kboakye/code/swift/swift/utils/run_utils.py", line 32, in x_main
    result = llm_x(args, **kwargs)
  File "/home/kboakye/code/swift/swift/llm/infer.py", line 286, in llm_infer
    llm_engine, template = prepare_lmdeploy_engine_template(args)
  File "/home/kboakye/code/swift/swift/llm/utils/lmdeploy_utils.py", line 435, in prepare_lmdeploy_engine_template
    lmdeploy_engine = get_lmdeploy_engine(
  File "/home/kboakye/code/swift/swift/llm/utils/lmdeploy_utils.py", line 96, in get_lmdeploy_engine
    lmdeploy_engine = pipeline(model_dir, backend_config=backend_config, **pipeline_kwargs)
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/lmdeploy/api.py", line 89, in pipeline
    return pipeline_class(model_path,
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/lmdeploy/serve/vl_async_engine.py", line 21, in __init__
    self.vl_encoder = ImageEncoder(model_path,
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/lmdeploy/vl/engine.py", line 85, in __init__
    self.model = load_vl_model(model_path, backend_config=backend_config)
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/lmdeploy/vl/model/builder.py", line 55, in load_vl_model
    return module(**kwargs)
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/lmdeploy/vl/model/base.py", line 31, in __init__
    self.build_model()
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/lmdeploy/vl/model/llava_hf.py", line 35, in build_model
    load_checkpoint_and_dispatch(
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/accelerate/big_modeling.py", line 604, in load_checkpoint_and_dispatch
    device_map = infer_auto_device_map(
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1183, in infer_auto_device_map
    if check_tied_parameters_in_config(model) and len(tied_parameters) == 0:
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 574, in check_tied_parameters_in_config
    and model.get_output_embeddings()
  File "/home/kboakye/miniconda3/envs/ms-swift/lib/python3.10/site-packages/transformers/models/llava/modeling_llava.py", line 259, in get_output_embeddings
    return self.language_model.get_output_embeddings()
AttributeError: 'NoneType' object has no attribute 'get_output_embeddings'

According to the supported models list, this model should be supported by lmdeploy.

Jintao-Huang commented 1 week ago

You can use vllm for inference. lmdeploy does not support this; I will update the documentation.
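For reference, the vllm-backed invocation would look like this (a sketch: it mirrors the failing command above, changing only the `--infer_backend` flag):

```shell
# Same model, but with the vllm inference backend instead of lmdeploy
model_type="llava-llama-3-8b-v1_1"
CUDA_VISIBLE_DEVICES=0 swift infer \
    --model_type $model_type \
    --infer_backend vllm
```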

thisiskofi commented 1 week ago

Interesting. I'm able to run pure lmdeploy (without swift) on this model: xtuner/llava-llama-3-8b-v1_1-hf on Hugging Face. I'm curious what the difference is.
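The pure-lmdeploy path described above looks roughly like this (a sketch following lmdeploy's documented VLM quickstart; it loads the HF-format checkpoint `xtuner/llava-llama-3-8b-v1_1-hf` directly, rather than the model directory that swift resolves and passes to `pipeline()`, which may be where the two paths diverge):

```python
# Sketch: running the HF-format checkpoint directly through lmdeploy,
# bypassing swift's model resolution. Requires lmdeploy and a GPU.
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('xtuner/llava-llama-3-8b-v1_1-hf')

# The example image URL below is illustrative, not from the thread.
image = load_image('https://example.com/some_image.jpg')
response = pipe(('describe this image', image))
print(response.text)
```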