Traceback (most recent call last):
  File "/opt/conda/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/cli/entrypoint.py", line 37, in run
    args.run(args)
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/cli/serve.py", line 283, in api_server
    run_api_server(args.model_path,
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/serve/openai/api_server.py", line 1191, in serve
    VariableInterface.async_engine = pipeline_class(
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/serve/async_engine.py", line 206, in __init__
    self._build_turbomind(model_path=model_path,
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/serve/async_engine.py", line 254, in _build_turbomind
    self.engine = tm.TurboMind.from_pretrained(
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/turbomind/turbomind.py", line 396, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/turbomind/turbomind.py", line 170, in __init__
    self.model_comm = self._from_hf(model_source=model_source,
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/turbomind/turbomind.py", line 305, in _from_hf
    output_model.export()
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 273, in export
    self.export_transformer_block(bin, i)
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/turbomind/deploy/target_model/w4.py", line 156, in export_transformer_block
    self.save_split(w2_sz, f'layers.{i}.feed_forward.w2.scales_zeros', 0)
  File "/opt/conda/lib/python3.8/site-packages/lmdeploy/turbomind/deploy/target_model/base.py", line 246, in save_split
    assert tensor.shape[split_dim] % tp == 0
AssertionError
Describe the bug
After AWQ quantization with lmdeploy, the model cannot be deployed across multiple GPUs; serving fails with: assert tensor.shape[split_dim] % tp == 0
Reproduction
AWQ quantization:

CUDA_VISIBLE_DEVICES=6,7 lmdeploy lite auto_awq \
    model_path \
    --calib-dataset 'c4' \
    --calib-samples 128 \
    --work-dir xxx

After quantizing with the command above, launch multi-GPU serving with:

CUDA_VISIBLE_DEVICES=6,7 lmdeploy serve api_server xxx \
    --server-name 0.0.0.0 \
    --server-port 8006 \
    --tp 2

This fails with: assert tensor.shape[split_dim] % tp == 0
Environment
Error traceback