chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama
Apache License 2.0

The model merged after LoRA training can no longer be used as in the previous version #1238

Closed: zdj-1995 closed this issue 11 months ago

zdj-1995 commented 1 year ago

Problem description: I am using the Tongyi Qianwen (Qwen) 7B model. After training and merging with LLaMA-Efficient-Tuning, the merged Qwen-7B model cannot be used and throws an error.

  1. Run `python server/llm_api.py`

```
(langchain666) ubuntu@demosvr:~/Langchain-Chatchat-master$ python server/llm_api.py
2023-08-25 14:46:19,293 - llm_api.py[line:231] - INFO: {'local_model_path': '/home/ubuntu/LLaMA-Efficient-Tuning-main/output', 'api_base_url': 'http://192.168.0.10:8888/v1', 'api_key': 'EMPTY'}
2023-08-25 14:46:19,293 - llm_api.py[line:234] - INFO: 如需查看 llm_api 日志,请前往 /home/ubuntu/Langchain-Chatchat-master/logs
2023-08-25 14:46:19 | ERROR | stderr | INFO: Started server process [1045533]
2023-08-25 14:46:19 | ERROR | stderr | INFO: Waiting for application startup.
INFO: Started server process [1045535]
INFO: Waiting for application startup.
2023-08-25 14:46:20 | ERROR | stderr | INFO: Application startup complete.
2023-08-25 14:46:20 | ERROR | stderr | INFO: Uvicorn running on http://192.168.0.10:20001 (Press CTRL+C to quit)
2023-08-25 14:46:20,030 - instantiator.py[line:21] - INFO: Created a temporary directory at /tmp/tmpnhiidu_5
2023-08-25 14:46:20,030 - instantiator.py[line:76] - INFO: Writing /tmp/tmpnhiidu_5/_remote_module_non_scriptable.py
2023-08-25 14:46:20 | INFO | model_worker | Loading the model ['qwen-7B-Chat'] on worker 8833a9b1 ...
2023-08-25 14:46:20 | INFO | stdout | Loading /home/ubuntu/LLaMA-Efficient-Tuning-main/output requires to execute some code in that repo, you can inspect the content of the repository at https://hf.co//home/ubuntu/LLaMA-Efficient-Tuning-main/output. You can dismiss this prompt by passing trust_remote_code=True.
2023-08-25 14:46:20 | INFO | stdout | Do you accept? [y/N]
2023-08-25 14:46:20 | ERROR | stderr | Process model_worker(1045274):
2023-08-25 14:46:20 | ERROR | stderr | Traceback (most recent call last):
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/anaconda3/envs/langchain666/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
2023-08-25 14:46:20 | ERROR | stderr |     self.run()
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/anaconda3/envs/langchain666/lib/python3.10/multiprocessing/process.py", line 108, in run
2023-08-25 14:46:20 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/Langchain-Chatchat-master/server/llm_api.py", line 194, in run_model_worker
2023-08-25 14:46:20 | ERROR | stderr |     app = create_model_worker_app(args, **kwargs)
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/Langchain-Chatchat-master/server/llm_api.py", line 128, in create_model_worker_app
2023-08-25 14:46:20 | ERROR | stderr |     worker = ModelWorker(
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/anaconda3/envs/langchain666/lib/python3.10/site-packages/fastchat/serve/model_worker.py", line 207, in __init__
2023-08-25 14:46:20 | ERROR | stderr |     self.model, self.tokenizer = load_model(
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/anaconda3/envs/langchain666/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 268, in load_model
2023-08-25 14:46:20 | ERROR | stderr |     model, tokenizer = adapter.load_model(model_path, kwargs)
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/anaconda3/envs/langchain666/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 72, in load_model
2023-08-25 14:46:20 | ERROR | stderr |     model = AutoModelForCausalLM.from_pretrained(
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/anaconda3/envs/langchain666/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 461, in from_pretrained
2023-08-25 14:46:20 | ERROR | stderr |     config, kwargs = AutoConfig.from_pretrained(
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/anaconda3/envs/langchain666/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 986, in from_pretrained
2023-08-25 14:46:20 | ERROR | stderr |     trust_remote_code = resolve_trust_remote_code(
2023-08-25 14:46:20 | ERROR | stderr |   File "/home/ubuntu/anaconda3/envs/langchain666/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 538, in resolve_trust_remote_code
2023-08-25 14:46:20 | ERROR | stderr |     answer = input(
2023-08-25 14:46:20 | ERROR | stderr | EOFError: EOF when reading a line
```
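Editor's note: the traceback bottoms out in transformers' `trust_remote_code` confirmation prompt. Qwen checkpoints ship custom modeling code, so transformers asks "Do you accept? [y/N]" via `input()`; the spawned `model_worker` process has no interactive stdin, so `input()` raises `EOFError`. A minimal sketch of the mechanism, where `confirm_remote_code` is a simplified stand-in for (not the real) `resolve_trust_remote_code`:

```python
import io
import sys

def confirm_remote_code(trust_remote_code=None):
    """Simplified stand-in for transformers' trust_remote_code prompt."""
    if trust_remote_code is None:
        # With no terminal attached (e.g. inside a spawned model_worker
        # process), input() hits end-of-file and raises EOFError.
        answer = input("Do you accept? [y/N] ")
        return answer.strip().lower() == "y"
    return trust_remote_code  # an explicit flag skips the prompt entirely

sys.stdin = io.StringIO("")  # emulate the detached worker: nothing to read
try:
    confirm_remote_code()
except EOFError as e:
    print(f"EOFError: {e}")  # same failure as in the traceback above

print(confirm_remote_code(trust_remote_code=True))  # prints: True
```

This is why passing `trust_remote_code=True` through to `from_pretrained` (which fastchat exposes for Qwen-style models in later versions) sidesteps the crash: the prompt never runs.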

zdj-1995 commented 1 year ago

Other projects can load the model without problems; it is only this version that cannot. Could you check where the conflict is?

github-actions[bot] commented 11 months ago

This issue has been marked as stale because it has had no activity for more than 30 days.

zRzRzRzRzRzRzR commented 11 months ago

Please load it with peft following the latest code. If the problem persists, feel free to open a new issue.
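Editor's note: for readers following the maintainer's suggestion, here is a minimal merge-then-save sketch using peft. Assumptions: the adapter directory is the output produced by LLaMA-Efficient-Tuning, and the helper name `merge_lora` and all paths are illustrative, not the repository's own loading code.

```python
def merge_lora(base_model_path: str, adapter_path: str, out_path: str) -> None:
    """Merge a LoRA adapter into its base model and save standalone weights.

    Imports are deferred so the sketch is cheap to define; transformers and
    peft must be installed to actually run it.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        base_model_path,
        trust_remote_code=True,  # Qwen checkpoints ship custom modeling code
    )
    merged = PeftModel.from_pretrained(base, adapter_path).merge_and_unload()
    merged.save_pretrained(out_path)
    # Keep tokenizer files next to the weights so the folder loads standalone.
    AutoTokenizer.from_pretrained(
        base_model_path, trust_remote_code=True
    ).save_pretrained(out_path)
```

Usage would look like `merge_lora("Qwen/Qwen-7B-Chat", ".../LLaMA-Efficient-Tuning-main/output", ".../models/Qwen-7B-Chat")`; the merged directory can then be pointed to from the model config.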

yaospacetim commented 10 months ago

The code changes have left us at a loss: all sorts of inexplicable problems appear, from models failing to load to empty or garbled answers. Is there a detailed guide for loading fine-tuned models, especially LoRA models, and their related configuration? If we cannot fix this ourselves, we will have to give up for now.

ryancurry-mz commented 10 months ago

> The code changes have left us at a loss: all sorts of inexplicable problems appear, from models failing to load to empty or garbled answers. Is there a detailed guide for loading fine-tuned models, especially LoRA models, and their related configuration? If we cannot fix this ourselves, we will have to give up for now.

After making the changes it no longer runs, with all sorts of strange errors. I am using a local model. [image]

yaospacetim commented 10 months ago

> > The code changes have left us at a loss: all sorts of inexplicable problems appear, from models failing to load to empty or garbled answers. Is there a detailed guide for loading fine-tuned models, especially LoRA models, and their related configuration? If we cannot fix this ourselves, we will have to give up for now.
>
> After making the changes it no longer runs, with all sorts of strange errors. I am using a local model. [image]

That one is easy to fix: just rename the folder to the same name the official model uses. The problems I described are the strange ones; I cannot tell where they come from.
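Editor's note: the rename works because fastchat's model adapter selects its loading logic from the model name, so the directory holding the merged weights should carry the official model name. A sketch of the corresponding config entry; the dict layout and field names are inferred from the log earlier in this thread, and the paths are placeholders:

```python
# configs/model_config.py (sketch; field names taken from the log above)
llm_model_dict = {
    "Qwen-7B-Chat": {
        # Folder renamed from e.g. ".../LLaMA-Efficient-Tuning-main/output"
        # to the official model name so the adapter resolves it correctly.
        "local_model_path": "/home/ubuntu/models/Qwen-7B-Chat",
        "api_base_url": "http://192.168.0.10:8888/v1",
        "api_key": "EMPTY",
    },
}
```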

TC10127 commented 3 months ago

> > The code changes have left us at a loss: all sorts of inexplicable problems appear, from models failing to load to empty or garbled answers. Is there a detailed guide for loading fine-tuned models, especially LoRA models, and their related configuration? If we cannot fix this ourselves, we will have to give up for now.
>
> After making the changes it no longer runs, with all sorts of strange errors. I am using a local model. [image]

Hi, I have run into the same problem as you. Have you managed to solve it? 🙏