Open ZanePoe opened 2 months ago
hi, pls. check https://github.com/InternLM/lmdeploy/issues/2210
Hi, I can do w8a8 quantization now, but loading the model throws an error:
Traceback (most recent call last):
File "/data/liuyuanchao/swift/lmdeploy/dss_quato.py", line 14, in <module>
pipe = pipeline(model_path, chat_template_config=ChatTemplateConfig('llama3'))
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/api.py", line 89, in pipeline
return pipeline_class(model_path,
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/serve/vl_async_engine.py", line 24, in __init__
super().__init__(model_path, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/serve/async_engine.py", line 190, in __init__
self._build_turbomind(model_path=model_path,
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/serve/async_engine.py", line 235, in _build_turbomind
self.engine = tm.TurboMind.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/turbomind.py", line 340, in from_pretrained
return cls(model_path=pretrained_model_name_or_path,
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/turbomind.py", line 144, in __init__
self.model_comm = self._from_hf(model_source=model_source,
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/turbomind.py", line 235, in _from_hf
output_model = OUTPUT_MODELS.get(output_model_name)(
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/target_model/w4.py", line 80, in __init__
super().__init__(input_model, cfg, to_file, out_dir)
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 172, in __init__
self.cfg = self.get_config(cfg)
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/target_model/w4.py", line 92, in get_config
w1s, _, _ = bin.ffn_scale(i)
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/source_model/llama_awq.py", line 52, in ffn_scale
return ensure_fp16orint32(self._ffn(i, 'scales'))
File "/usr/local/lib/python3.8/dist-packages/lmdeploy/turbomind/deploy/source_model/llama.py", line 103, in _ffn
tensor = self.params[
KeyError: 'llm.model.layers.0.mlp.gate_proj.scales'
w8a8 is only supported by the PyTorch engine. Please set backend_config to PytorchEngineConfig when using pipeline:
from lmdeploy import pipeline, PytorchEngineConfig

# model_path: the w8a8 work-dir produced by `lmdeploy lite smooth_quant`
pipe = pipeline(model_path,
                backend_config=PytorchEngineConfig(tp=1,
                                                   session_len=4096,
                                                   max_prefill_token_num=4096,
                                                   cache_max_entry_count=0.5))
It errored:
raise ValueError(
ValueError: The model's quantization config from the arguments has no quant_method attribute. Make sure that the model has been correctly quantized
@lyc728 hi, sorry for the misunderstanding. auto_awq is for w4a16 and smooth_quant is for w8a8.
In your case, you are using w4a16, which is only supported by the TurboMind engine, and it requires passing model_format='awq'.
This is how to use it in a pipeline: https://lmdeploy.readthedocs.io/en/latest/quantization/w4a16.html#inference
from lmdeploy import pipeline, TurbomindEngineConfig
engine_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline("./internlm2_5-7b-chat-4bit", backend_config=engine_config)
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
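For comparison, a w8a8 model produced by smooth_quant is loaded with the PyTorch engine instead. A minimal sketch, assuming './internlm-chat-7b-w8' is the work-dir from the smooth_quant command used later in this thread:

from lmdeploy import pipeline, PytorchEngineConfig

# assumption: './internlm-chat-7b-w8' is the output of
# `lmdeploy lite smooth_quant internlm/internlm-chat-7b --work-dir ./internlm-chat-7b-w8`
engine_config = PytorchEngineConfig(tp=1)
pipe = pipeline('./internlm-chat-7b-w8', backend_config=engine_config)
response = pipe(['Hi, pls intro yourself', 'Shanghai is'])
print(response)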
Hi, I did the quantization by following this document:
https://github.com/InternLM/lmdeploy/blob/main/docs/zh_cn/quantization/w8a8.md
lmdeploy lite smooth_quant internlm/internlm-chat-7b --work-dir ./internlm-chat-7b-w8
Right now the model cannot be loaded for inference. If I set model_format='awq', that should be for w4, which does not match the model I converted.
According to these tables, glm4 does not support w8a8 (PyTorch engine), but it does support w4a16 (TurboMind engine): https://lmdeploy.readthedocs.io/en/latest/supported_models/supported_models.html#models-supported-by-pytorch https://lmdeploy.readthedocs.io/en/latest/supported_models/supported_models.html#models-supported-by-turbomind
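For the w4a16 route, a minimal sketch of loading an AWQ-quantized glm4 model with the TurboMind engine, assuming './models/glm-4-9b-chat-int4' is the auto_awq work-dir from the report below:

from lmdeploy import pipeline, TurbomindEngineConfig

# assumption: './models/glm-4-9b-chat-int4' was produced by
# `lmdeploy lite auto_awq THUDM/glm-4-9b-chat --work-dir ./models/glm-4-9b-chat-int4`
engine_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline('./models/glm-4-9b-chat-int4', backend_config=engine_config)
print(pipe(['Hi, pls intro yourself']))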
In my case the models are InternVL2 and MiniCPM 2.5.
OK, got it.
Checklist
Describe the bug
The command
lmdeploy lite auto_awq THUDM/glm-4-9b-chat --work-dir ./models/glm-4-9b-chat-int4 --search-scale True --batch-size 8
errors out, but the command lmdeploy lite auto_awq THUDM/glm-4-9b-chat --work-dir ./models/glm-4-9b-chat-int4
works fine; quantization and inference both run normally.
Reproduction
lmdeploy lite auto_awq THUDM/glm-4-9b-chat --work-dir ./models/glm-4-9b-chat-int4 --search-scale True --batch-size 8
Environment
Error traceback