open-compass / VLMEvalKit

An open-source evaluation toolkit for large vision-language models (LVLMs), supporting ~100 VLMs and 40+ benchmarks
https://huggingface.co/spaces/opencompass/open_vlm_leaderboard
Apache License 2.0

qwen2vl run.py cannot run one model per GPU on a single multi-GPU machine to parallelize a single evaluation #479

Closed M3Dade closed 3 days ago

M3Dade commented 1 week ago

When I run

torchrun --nproc-per-node=8 run.py --data DocVQA_TEST --model Qwen2-VL-2B-Instruct --verbose

the following error occurs:

[{'role': 'user', 'content': [{'type': 'image', 'image': '/vlmeval/images/DocVQA_TEST/57348.jpg', 'min_pixels': 1003520, 'max_pixels': 12845056}, {'type': 'text', 'text': "What is the % of 'Providers of Capital' in the year 2010 based on 'Distribution of Value-Added' graph?\nPlease try to answer the question with short words or phrases if possible."}]}]
  0%|                             | 0/649 [00:04<?, ?it/s]
Traceback (most recent call last):
  File "/VLMEvalKit/run.py", line 226, in <module>
    main()
  File "/VLMEvalKit/run.py", line 140, in main
    model = infer_data_job(
  File "/VLMEvalKit/vlmeval/inference.py", line 164, in infer_data_job
    model = infer_data(
  File "/VLMEvalKit/vlmeval/inference.py", line 129, in infer_data
    response = model.generate(message=struct, dataset=dataset_name)
  File "/VLMEvalKit/vlmeval/vlm/base.py", line 115, in generate
    return self.generate_inner(message, dataset)
  File "/VLMEvalKit/vlmeval/vlm/qwen2_vl/model.py", line 100, in generate_inner
    generated_ids = self.model.generate(
  File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 2053, in generate
    result = self._sample(
  File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 3003, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1686, in forward
    inputs_embeds = inputs_embeds.masked_scatter(image_mask, image_embeds)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:0! (when checking argument for argument mask in method wrapper_CUDA__masked_scatter_)

The existing issues don't seem to cover my need. My model is small enough, so I want eight models on eight GPUs, with each GPU running inference on a portion of the files. According to the README, this appears to be what torchrun --nproc-per-node=8 run.py achieves (see the sketch below).
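For reference, here is a minimal sketch of the one-model-per-GPU pattern that command implies. The sharding logic is illustrative, not VLMEvalKit's actual code; RANK and WORLD_SIZE are environment variables set by torchrun.

import os
import torch

# Illustrative sketch only, not VLMEvalKit's actual sharding code.
rank = int(os.environ['RANK'])              # set by torchrun for each worker
world_size = int(os.environ['WORLD_SIZE'])  # 8 with --nproc-per-node=8
torch.cuda.set_device(rank)                 # pin this worker to one GPU

samples = list(range(649))                  # e.g. the 649 DocVQA_TEST items
my_shard = samples[rank::world_size]        # each worker infers a disjoint slice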

I consulted the existing issue #244, but the model loading in qwen2vl's model.py looks fine:

self.model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype='auto', device_map='auto', attn_implementation='flash_attention_2'
).eval()

It is similar to the loading in issue 224:

self.model = AutoModel.from_pretrained(model_path, torch_dtype=torch.bfloat16,
                                       trust_remote_code=True,
                                       load_in_8bit=load_in_8bit, device_map='auto').eval()
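For context: torchrun --nproc-per-node=8 starts eight worker processes, and each of them sees all eight GPUs. With device_map='auto', every process presumably shards its copy of the model across all visible devices while its own inputs live on a single GPU, so the masked_scatter call in the traceback receives tensors on different devices. A minimal standalone reproduction of that mismatch, assuming at least two GPUs:

import torch

# masked_scatter_ requires all tensors on one device; mixing devices raises
# the same RuntimeError shown in the traceback above.
target = torch.zeros(4, device='cuda:1')
mask = torch.tensor([True, False, True, False], device='cuda:0')  # wrong device
source = torch.ones(4, device='cuda:1')

target.masked_scatter_(mask.to('cuda:1'), source)  # OK: everything on cuda:1
target.masked_scatter_(mask, source)               # RuntimeError: cuda:1 vs cuda:0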
paulpaul91 commented 5 days ago

Specify the device_map yourself and that will solve it.
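A minimal sketch of that suggestion, assuming LOCAL_RANK is set by torchrun and model_path is defined as in the snippet above: pin the whole model to this worker's GPU instead of letting device_map='auto' spread it across every visible device.

import os
from transformers import Qwen2VLForConditionalGeneration

local_rank = int(os.environ.get('LOCAL_RANK', 0))  # set by torchrun per worker

self.model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_path,
    torch_dtype='auto',
    device_map=f'cuda:{local_rank}',  # one full copy of the model per GPU
    attn_implementation='flash_attention_2',
).eval()

Since Qwen2-VL-2B fits on a single card, this gives eight independent model replicas, one per worker, and the cross-device error should no longer occur.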

kennymckormick commented 3 days ago

@M3Dade, the issue has been fixed.