WuNein / vllm4mteb

vLLM for embedding tasks using Original LLMs (Qwen2, LLaMA)
MIT License
22 stars, 1 fork

Latest vLLM raises AttributeError: 'LlamaForCausalLM' object has no attribute 'pooler' #3

Closed — PLUTO-SCY closed this issue 2 months ago

PLUTO-SCY commented 2 months ago

Thank you very much for this project. Great work!

However, I ran into a small problem when using the latest vLLM to extract embeddings from Meta-Llama-3-8B-Instruct. The full output is below:

    INFO 09-09 21:28:20 model_runner.py:915] Starting to load model /data2/shaochenyang/scywork/VLLM/Models/Meta-Llama-3-8B-Instruct...
    Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
    Loading safetensors checkpoint shards:  25% Completed | 1/4 [00:01<00:04, 1.36s/it]
    Loading safetensors checkpoint shards:  50% Completed | 2/4 [00:02<00:02, 1.43s/it]
    Loading safetensors checkpoint shards:  75% Completed | 3/4 [00:03<00:01, 1.05s/it]
    Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:04<00:00, 1.24s/it]
    INFO 09-09 21:28:26 model_runner.py:926] Loading model weights took 14.9595 GB
    Processed prompts:   0% | 0/4 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
    [rank0]: Traceback (most recent call last):
    [rank0]:   File "/data2/shaochenyang/scywork/VLLM/example2monkey.py", line 144, in <module>
    [rank0]:     outputs = model.encode(prompts)
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/vllm/utils.py", line 1032, in inner
    [rank0]:     return fn(*args, **kwargs)
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 560, in encode
    [rank0]:     outputs = self._run_engine(use_tqdm=use_tqdm)
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 704, in _run_engine
    [rank0]:     step_outputs = self.llm_engine.step()
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 1551, in step
    [rank0]:     output = self.model_executor.execute_model(
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/vllm/executor/gpu_executor.py", line 130, in execute_model
    [rank0]:     output = self.driver_worker.execute_model(execute_model_req)
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/vllm/worker/worker_base.py", line 327, in execute_model
    [rank0]:     output = self.model_runner.execute_model(
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    [rank0]:     return func(*args, **kwargs)
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/vllm/worker/embedding_model_runner.py", line 122, in execute_model
    [rank0]:   File "/data2/shaochenyang/.local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1729, in __getattr__
    [rank0]:     raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
    [rank0]: AttributeError: 'LlamaForCausalLM' object has no attribute 'pooler'

It seems like a vLLM version issue? Could I trouble you to take a look?

WuNein commented 2 months ago

I see what happened: you skipped a step. The name in config.json has to be changed! Without the rename, vLLM will not recognize the model as our custom embedding model!

{
  "_name_or_path": "princeton-nlp/Sheared-LLaMA-1.3B",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 5504,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "num_key_value_heads": 16,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.28.1",
  "use_cache": true,
  "vocab_size": 32000
}

Change the architectures name to:

MyLlamaEmbeddingModel
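The rename above can be scripted. The sketch below is a minimal helper, assuming a local checkpoint directory containing a config.json; the function name and the directory path are illustrative, not part of the repo:

```python
import json
from pathlib import Path

def patch_architectures(model_dir: str, new_arch: str = "MyLlamaEmbeddingModel") -> dict:
    """Rewrite the "architectures" field of a checkpoint's config.json in place,
    so vLLM dispatches to the custom embedding class instead of LlamaForCausalLM."""
    config_path = Path(model_dir) / "config.json"
    config = json.loads(config_path.read_text())
    config["architectures"] = [new_arch]
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Example (hypothetical path):
# patch_architectures("/data2/.../Meta-Llama-3-8B-Instruct")
```

Note this edits the file in place, so keep a backup of the original config.json if you still want to use the checkpoint for ordinary causal-LM generation.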

Remember to star the repo!

WuNein commented 2 months ago

By the way, I've also added a Qwen2 example.

PLUTO-SCY commented 2 months ago

Thanks! That completely solved my problem; I had indeed not renamed it. Starred. Have a nice day!

WuNein commented 2 months ago

By the way, do you have any performance benchmark numbers you could share? Ideally from a run on a larger amount of data.