haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

[Usage] Inference issue using the latest DeepSpeed script on the LLaMA-2 fine-tuned model #288

Open findalexli opened 1 year ago

findalexli commented 1 year ago

When did you clone our code?

I cloned the codebase after 5/1/23.

Describe the issue

Issue:

I am trying to run inference with the provided LLaMA-2-13B-chat fine-tuned model, which I downloaded from Hugging Face and placed in a checkpoints folder. I ran into a separate issue with the older model_vqa script (the image processor is None), so I switched to this script.

Command:

python -m llava.eval.model_vqa_ds --model-path /home/ubuntu/LLaVA/checkpoints/llava-llama-2-13b-chat-lightning-preview

Log:


```
/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/ops/csrc/transformer/inference/csrc/pt_binding.cpp:1571:72: warning: narrowing conversion of ‘mlp_1_out_neurons’ from ‘const size_t’ {aka ‘const long unsigned int’} to ‘long int’ [-Wnarrowing]
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1893, in _run_ninja_build
    subprocess.run(
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/ubuntu/LLaVA/llava/eval/model_vqa_ds.py", line 113, in <module>
    eval_model(args)
  File "/home/ubuntu/LLaVA/llava/eval/model_vqa_ds.py", line 39, in eval_model
    model = deepspeed.init_inference(model, mp_size=1, dtype=torch.half, replace_with_kernel_inject=True)
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/__init__.py", line 333, in init_inference
    engine = InferenceEngine(model, config=ds_inference_config)
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 192, in __init__
    self._apply_injection_policy(config)
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 426, in _apply_injection_policy
    replace_transformer_layer(client_module, self.module, checkpoint, config, self.config)
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 537, in replace_transformer_layer
    replaced_module = replace_module(model=model,
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 780, in replace_module
    replaced_module, _ = _replace_module(model, policy, state_dict=sd)
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 861, in _replace_module
    _, layer_id = _replace_module(child,
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 861, in _replace_module
    _, layer_id = _replace_module(child,
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 837, in _replace_module
    replaced_module = policies[child.__class__][0](child,
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 514, in replace_fn
    new_module = replace_with_policy(child,
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 348, in replace_with_policy
    _container.create_module()
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/module_inject/containers/llama.py", line 36, in create_module
    self.module = DeepSpeedGPTInference(_config, mp_group=self.mp_group)
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_gpt.py", line 20, in __init__
    super().__init__(config, mp_group, quantize_scales, quantize_groups, merge_count, mlp_extra_grouping)
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py", line 54, in __init__
    inference_module = builder.load()
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 454, in load
    return self.jit_load(verbose)
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/deepspeed/ops/op_builder/builder.py", line 497, in jit_load
    op_module = load(name=self.name,
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1284, in load
    return _jit_compile(
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1509, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1624, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/home/ubuntu/mambaforge-pypy3/envs/llava/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1909, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'transformer_inference'
```
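
For context, the failure means DeepSpeed could not JIT-compile its `transformer_inference` CUDA extension with ninja. A general sketch of common remedies for this class of build failure (not a confirmed fix for this particular issue; the cache path assumes torch's default extensions directory):

```
# Report which DeepSpeed ops are compatible with the installed torch/CUDA toolchain
ds_report

# Clear stale JIT-compiled extensions so the next run rebuilds from scratch
rm -rf ~/.cache/torch_extensions

# Alternatively, pre-compile the inference kernels at install time instead of JIT-compiling
DS_BUILD_TRANSFORMER_INFERENCE=1 pip install deepspeed --force-reinstall --no-cache-dir
```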

haotian-liu commented 1 year ago

Hi, the DeepSpeed inference script was committed accidentally while I was debugging it. Please use model_vqa for now.
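
For reference, a minimal model_vqa invocation would look something like the following; the question/image/answer paths are placeholders, and the flags mirror the eval script's arguments:

```
python -m llava.eval.model_vqa \
    --model-path /home/ubuntu/LLaVA/checkpoints/llava-llama-2-13b-chat-lightning-preview \
    --question-file /path/to/questions.jsonl \
    --image-folder /path/to/images \
    --answers-file /path/to/answers.jsonl
```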

Unfortunately, I have not found that DeepSpeed inference helps with speed, so I am still investigating. If you have any insights on this, please share. Thanks!