DataXujing / TensorRT-LLM-ChatGLM3

:fire: Hands-on large-model deployment: TensorRT-LLM, Triton Inference Server, vLLM
Apache License 2.0

After downloading the ChatGLM3-6b model locally, running `python offline_chatglm3.py` fails with the error below. How can this be fixed? #3

Open · hanyong-max opened 7 months ago

hanyong-max commented 7 months ago

```
Traceback (most recent call last):
  File "/root/anaconda3/lib/python3.11/site-packages/vllm/transformers_utils/config.py", line 30, in get_config
    config = AutoConfig.from_pretrained(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/anaconda3/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1114, in from_pretrained
    trust_remote_code = resolve_trust_remote_code(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/anaconda3/lib/python3.11/site-packages/transformers/dynamic_module_utils.py", line 621, in resolve_trust_remote_code
    raise ValueError(
ValueError: Loading /home/hanyong/TensorRT-LLM-ChatGLM3-main/vLLM/ChatGLM3-6b requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/hanyong/TensorRT-LLM-ChatGLM3-main/vLLM/offline_chatglm3.py", line 12, in <module>
    llm = LLM(model="/home/hanyong/TensorRT-LLM-ChatGLM3-main/vLLM/ChatGLM3-6b")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/anaconda3/lib/python3.11/site-packages/vllm/entrypoints/llm.py", line 109, in __init__
    self.llm_engine = LLMEngine.from_engine_args(engine_args)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/anaconda3/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 386, in from_engine_args
    engine_configs = engine_args.create_engine_configs()
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/anaconda3/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 287, in create_engine_configs
    model_config = ModelConfig(
                   ^^^^^^^^^^^^
  File "/root/anaconda3/lib/python3.11/site-packages/vllm/config.py", line 111, in __init__
    self.hf_config = get_config(self.model, trust_remote_code, revision,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/anaconda3/lib/python3.11/site-packages/vllm/transformers_utils/config.py", line 43, in get_config
    raise RuntimeError(err_msg) from e
RuntimeError: Failed to load the model config. If the model is a custom model not yet available in the Hugging Face transformers library, consider setting trust_remote_code=True in LLM or using the --trust-remote-code flag in the CLI.
```

DataXujing commented 7 months ago

As the error message suggests, try adding trust_remote_code=True and see whether that resolves it.
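A minimal sketch of what that change might look like in offline_chatglm3.py. The model path is taken from the traceback above; the prompt and sampling settings are illustrative assumptions, not the repo's actual script:

```python
from vllm import LLM, SamplingParams

# trust_remote_code=True lets transformers execute the custom configuration
# and modeling code bundled with the local ChatGLM3-6b checkpoint. Review
# that code first, as the ValueError above warns.
llm = LLM(
    model="/home/hanyong/TensorRT-LLM-ChatGLM3-main/vLLM/ChatGLM3-6b",
    trust_remote_code=True,
)

# Illustrative generation call; the prompt and sampling values are assumptions.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)
outputs = llm.generate(["你好,请介绍一下你自己。"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

For the OpenAI-compatible server, the equivalent is the --trust-remote-code CLI flag mentioned in the RuntimeError.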