chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0

[BUG] Running the LLM on CPU raises "There are multiple widgets with the same key=''" #2377

Closed zhongzhubailong closed 10 months ago

zhongzhubailong commented 11 months ago

After editing model_config.py in the configs folder to set LLM_DEVICE = "cpu" (i.e. running the LLM on CPU), the following error appears at runtime:

DuplicateWidgetID: There are multiple widgets with the same key=''.

To fix this, please make sure that the key argument is unique for each widget you create.

Traceback:
File "D:\Langchain-Chatchat\webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
File "D:\Langchain-Chatchat\webui_pages\dialogue\dialogue.py", line 326, in dialogue_page
    chat_box.show_feedback(**feedback_kwargs,
File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_chatbox\messages.py", line 309, in show_feedback
    return streamlit_feedback(**kwargs)
File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_feedback\__init__.py", line 104, in streamlit_feedback
    component_value = _component_func(
File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_option_menu\streamlit_callback.py", line 20, in wrapper_register_widget
    return register_widget(*args, **kwargs)
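
For reference, the edit described above amounts to roughly the following. This is a hedged sketch assuming the v0.2.x model_config.py layout; only the LLM_DEVICE line is taken from this report, and the model entry is illustrative.

# configs/model_config.py -- hedged sketch of the reported change (v0.2.x layout assumed)

# Run the local LLM worker on CPU instead of CUDA; this single line is the
# change that precedes the DuplicateWidgetID error reported here.
LLM_DEVICE = "cpu"

# Illustrative only: the model that appears in the logs below.
LLM_MODELS = ["Qwen-7B-Chat-Int4"]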


liunux4odoo commented 11 months ago

Please post the complete log.

zhongzhubailong commented 11 months ago

Please post the complete log.

INFO:     127.0.0.1:54227 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-15 20:58:18,535 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
received input message:
{'history': [{'content': '你好', 'role': 'user'},
             {'content': '', 'role': 'assistant'}],
 'max_tokens': None,
 'model_name': 'Qwen-7B-Chat-Int4',
 'prompt_name': 'default',
 'query': '在吗',
 'stream': True,
 'temperature': 0.7}
INFO:     127.0.0.1:54227 - "POST /chat/chat HTTP/1.1" 200 OK
2023-12-15 20:58:18,545 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2023-12-15 20:58:18 | INFO | stdout | INFO:     127.0.0.1:54230 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2023-12-15 20:58:18,578 - util.py[line:67] - INFO: message='OpenAI API response' path=http://127.0.0.1:20000/v1/chat/completions processing_ms=None request_id=None response_code=200
2023-12-15 20:58:18 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2023-12-15 20:58:18,896 - utils.py[line:25] - ERROR: KeyError: Caught exception: 'choices'
2023-12-15 20:58:18.898 Uncaught app exception
Traceback (most recent call last):
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "D:\Langchain-Chatchat\webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "D:\Langchain-Chatchat\webui_pages\dialogue\dialogue.py", line 223, in dialogue_page
    chat_box.show_feedback(**feedback_kwargs,
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_chatbox\messages.py", line 309, in show_feedback
    return streamlit_feedback(**kwargs)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_feedback\__init__.py", line 104, in streamlit_feedback
    component_value = _component_func(
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 80, in __call__
    return self.create_instance(*args, default=default, key=key, **kwargs)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\metrics_util.py", line 396, in wrapped_func
    result = non_optional_func(*args, **kwargs)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 241, in create_instance
    return_value = marshall_component(dg, element)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 212, in marshall_component
    component_state = register_widget(
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_option_menu\streamlit_callback.py", line 20, in wrapper_register_widget
    return register_widget(*args, **kwargs)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\state\widgets.py", line 161, in register_widget
    return register_widget_from_metadata(metadata, ctx, widget_func_name, element_type)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\state\widgets.py", line 194, in register_widget_from_metadata
    raise DuplicateWidgetID(
streamlit.errors.DuplicateWidgetID: There are multiple widgets with the same `key=''`.

To fix this, please make sure that the key argument is unique for each widget you create.

liunux4odoo commented 11 months ago

Restart the server, then post the full log from startup all the way to the error.

zhongzhubailong commented 11 months ago

Restart the server, then post the full log from startup all the way to the error.

D:\Langchain-Chatchat>echo "Start VENV"
"Start VENV"

D:\Langchain-Chatchat>call Miniconda3\Scripts\activate.bat

(base) D:\Langchain-Chatchat>goto :run

(base) D:\Langchain-Chatchat>echo "Start API + WebUI"
"Start API + WebUI"

(base) D:\Langchain-Chatchat>python startup.py -a

==============================Langchain-Chatchat Configuration==============================
操作系统:Windows-10-10.0.22631-SP0.
python版本:3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
项目版本:v0.2.7
langchain版本:0.0.340. fastchat版本:0.2.33

当前使用的分词器:ChineseRecursiveTextSplitter
当前启动的LLM模型:['Qwen-7B-Chat-Int4'] @ cpu
{'device': 'cpu',
 'host': '127.0.0.1',
 'infer_turbo': False,
 'model_path': 'D:\\Langchain-Chatchat\\models\\LLM\\Qwen-7B-Chat-Int4',
 'port': 20002}
当前Embbedings模型: m3e-base @ cpu
==============================Langchain-Chatchat Configuration==============================

2023-12-16 09:22:12,987 - startup.py[line:649] - INFO: 正在启动服务:
2023-12-16 09:22:12,987 - startup.py[line:650] - INFO: 如需查看 llm_api 日志,请前往 D:\Langchain-Chatchat\logs
2023-12-16 09:22:15 | ERROR | stderr | INFO:     Started server process [7296]
2023-12-16 09:22:15 | ERROR | stderr | INFO:     Waiting for application startup.
2023-12-16 09:22:15 | ERROR | stderr | INFO:     Application startup complete.
2023-12-16 09:22:15 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2023-12-16 09:24:05 | INFO | model_worker | Loading the model ['Qwen-7B-Chat-Int4'] on worker 42a1fa8c ...
Using `disable_exllama` is deprecated and will be removed in version 4.37. Use `use_exllama` instead and specify the version with `exllama_config`.The value of `use_exllama` will be overwritten by `disable_exllama` passed in `GPTQConfig` or stored in your config file.
2023-12-16 09:24:12 | WARNING | transformers_modules.Qwen-7B-Chat-Int4.modeling_qwen | Try importing flash-attention for faster inference...
2023-12-16 09:24:12 | WARNING | transformers_modules.Qwen-7B-Chat-Int4.modeling_qwen | Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
2023-12-16 09:24:12 | WARNING | transformers_modules.Qwen-7B-Chat-Int4.modeling_qwen | Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
2023-12-16 09:24:12 | WARNING | transformers_modules.Qwen-7B-Chat-Int4.modeling_qwen | Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention
Loading checkpoint shards:   0%|                                                                 | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:  33%|███████████████████                                      | 1/3 [00:01<00:02,  1.49s/it]
Loading checkpoint shards:  67%|██████████████████████████████████████                   | 2/3 [00:01<00:00,  1.25it/s]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 3/3 [00:03<00:00,  1.12s/it]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 3/3 [00:03<00:00,  1.10s/it]
2023-12-16 09:24:16 | ERROR | stderr |
2023-12-16 09:24:17 | INFO | model_worker | Register to controller
INFO:     Started server process [6524]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:7861 (Press CTRL+C to quit)

==============================Langchain-Chatchat Configuration==============================
操作系统:Windows-10-10.0.22631-SP0.
python版本:3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
项目版本:v0.2.7
langchain版本:0.0.340. fastchat版本:0.2.33

当前使用的分词器:ChineseRecursiveTextSplitter
当前启动的LLM模型:['Qwen-7B-Chat-Int4'] @ cpu
{'device': 'cpu',
 'host': '127.0.0.1',
 'infer_turbo': False,
 'model_path': 'D:\\Langchain-Chatchat\\models\\LLM\\Qwen-7B-Chat-Int4',
 'port': 20002}
当前Embbedings模型: m3e-base @ cpu

服务端运行信息:
    OpenAI API Server: http://127.0.0.1:20000/v1
    Chatchat  API  Server: http://127.0.0.1:7861
    Chatchat WEBUI Server: http://127.0.0.1:8501
==============================Langchain-Chatchat Configuration==============================

  You can now view your Streamlit app in your browser.

  URL: http://127.0.0.1:8501

{'base_url': 'http://127.0.0.1:7861', 'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2023-12-16 09:24:33,726 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54000 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-16 09:24:33,728 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2023-12-16 09:24:33,870 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54000 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-16 09:24:33,873 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54000 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2023-12-16 09:24:33,875 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2023-12-16 09:24:33,893 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54000 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-16 09:24:33,895 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
{'base_url': 'http://127.0.0.1:7861', 'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2023-12-16 09:25:50,949 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54327 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-16 09:25:50,952 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54327 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2023-12-16 09:25:50,956 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2023-12-16 09:25:50,974 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54327 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-16 09:25:50,976 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
received input message:
{'history': [],
 'max_tokens': None,
 'model_name': 'Qwen-7B-Chat-Int4',
 'prompt_name': 'default',
 'query': '你好',
 'stream': True,
 'temperature': 0.7}
INFO:     127.0.0.1:54327 - "POST /chat/chat HTTP/1.1" 200 OK
2023-12-16 09:25:50,991 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2023-12-16 09:25:51 | INFO | stdout | INFO:     127.0.0.1:54331 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2023-12-16 09:25:51,128 - util.py[line:67] - INFO: message='OpenAI API response' path=http://127.0.0.1:20000/v1/chat/completions processing_ms=None request_id=None response_code=200
2023-12-16 09:25:51 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2023-12-16 09:25:51,907 - utils.py[line:25] - ERROR: KeyError: Caught exception: 'choices'
{'base_url': 'http://127.0.0.1:7861', 'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2023-12-16 09:25:55,551 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54350 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-16 09:25:55,551 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54350 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2023-12-16 09:25:55,551 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
{'timeout': 300.0, 'proxies': {'all://127.0.0.1': None, 'all://localhost': None, 'http://127.0.0.1': None, 'http://': None, 'https://': None, 'all://': None, 'http://localhost': None}}
2023-12-16 09:25:55,569 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:54350 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-16 09:25:55,572 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
received input message:
{'history': [{'content': '你好', 'role': 'user'},
             {'content': '', 'role': 'assistant'}],
 'max_tokens': None,
 'model_name': 'Qwen-7B-Chat-Int4',
 'prompt_name': 'default',
 'query': '?',
 'stream': True,
 'temperature': 0.7}
INFO:     127.0.0.1:54350 - "POST /chat/chat HTTP/1.1" 200 OK
2023-12-16 09:25:55,582 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2023-12-16 09:25:55 | INFO | stdout | INFO:     127.0.0.1:54353 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2023-12-16 09:25:55,662 - util.py[line:67] - INFO: message='OpenAI API response' path=http://127.0.0.1:20000/v1/chat/completions processing_ms=None request_id=None response_code=200
2023-12-16 09:25:55 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2023-12-16 09:25:55,983 - utils.py[line:25] - ERROR: KeyError: Caught exception: 'choices'
2023-12-16 09:25:55.984 Uncaught app exception
Traceback (most recent call last):
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "D:\Langchain-Chatchat\webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "D:\Langchain-Chatchat\webui_pages\dialogue\dialogue.py", line 223, in dialogue_page
    chat_box.show_feedback(**feedback_kwargs,
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_chatbox\messages.py", line 309, in show_feedback
    return streamlit_feedback(**kwargs)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_feedback\__init__.py", line 104, in streamlit_feedback
    component_value = _component_func(
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 80, in __call__
    return self.create_instance(*args, default=default, key=key, **kwargs)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\metrics_util.py", line 396, in wrapped_func
    result = non_optional_func(*args, **kwargs)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 241, in create_instance
    return_value = marshall_component(dg, element)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 212, in marshall_component
    component_state = register_widget(
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_option_menu\streamlit_callback.py", line 20, in wrapper_register_widget
    return register_widget(*args, **kwargs)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\state\widgets.py", line 161, in register_widget
    return register_widget_from_metadata(metadata, ctx, widget_func_name, element_type)
  File "D:\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\state\widgets.py", line 194, in register_widget_from_metadata
    raise DuplicateWidgetID(
streamlit.errors.DuplicateWidgetID: There are multiple widgets with the same `key=''`.

To fix this, please make sure that the `key` argument is unique for each
widget you create.

nanhui1122 commented 11 months ago

I ran into the same error, but in my setup I started the API with FastChat first and then used FastChat's model through chatchat's online-API mode. It turned out the name FastChat assigned when loading the model was different from the model name set in model_config; after making the two names identical, it worked.
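
For reference, a hedged sketch of what that fix looks like on the chatchat side, assuming the v0.2.x model_config.py layout; the model name and path below are illustrative, and the only point taken from the comment above is that the configured name must equal the name the FastChat worker registers.

# configs/model_config.py -- hedged sketch, names and paths are illustrative.
# The key below must be exactly the model name the FastChat worker registers
# with its controller (e.g. the value passed via --model-names when starting
# fastchat.serve.model_worker); otherwise chatchat cannot find the running
# model and the reply comes back empty.
LLM_MODELS = ["Qwen-7B-Chat"]

MODEL_PATH = {
    "llm_model": {
        "Qwen-7B-Chat": "D:\\models\\Qwen\\Qwen-7B-Chat",
    },
}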

liunux4odoo commented 11 months ago
{'history': [{'content': '你好', 'role': 'user'},
             {'content': '', 'role': 'assistant'}],

Why is a history message with empty content being passed here?

Try updating to 0.2.8.

zhongzhubailong commented 11 months ago
{'history': [{'content': '你好', 'role': 'user'},
             {'content': '', 'role': 'assistant'}],

Why is a history message with empty content being passed here?

Try updating to 0.2.8.

"你好" was the first message I sent; the answer to it came back blank, and the second message then produced the error. Updating to 0.2.9 gives the same error.

D:\BaiduNetdiskDownload\Langchain-Chatchat>echo "Start VENV"
"Start VENV"

D:\BaiduNetdiskDownload\Langchain-Chatchat>call Miniconda3\Scripts\activate.bat

(base) D:\BaiduNetdiskDownload\Langchain-Chatchat>goto :run

(base) D:\BaiduNetdiskDownload\Langchain-Chatchat>echo "Start API + WebUI"
"Start API + WebUI"

(base) D:\BaiduNetdiskDownload\Langchain-Chatchat>python startup.py -a

==============================Langchain-Chatchat Configuration==============================
操作系统:Windows-10-10.0.22631-SP0.
python版本:3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
项目版本:v0.2.9-preview
langchain版本:0.0.344. fastchat版本:0.2.33

当前使用的分词器:ChineseRecursiveTextSplitter
当前启动的LLM模型:['Qwen-1_8B-Chat'] @ cpu
{'device': 'cpu',
 'host': '127.0.0.1',
 'infer_turbo': False,
 'model_path': 'D:\\BaiduNetdiskDownload\\Langchain-Chatchat\\models\\Qwen\\Qwen-1_8B-Chat',
 'model_path_exists': True,
 'port': 20002}
当前Embbedings模型: m3e-base @ cuda
==============================Langchain-Chatchat Configuration==============================

2023-12-17 17:42:23,007 - startup.py[line:651] - INFO: 正在启动服务:
2023-12-17 17:42:23,007 - startup.py[line:652] - INFO: 如需查看 llm_api 日志,请前往 D:\BaiduNetdiskDownload\Langchain-Chatchat\logs
2023-12-17 17:42:31 | ERROR | stderr | INFO:     Started server process [18164]
2023-12-17 17:42:31 | ERROR | stderr | INFO:     Waiting for application startup.
2023-12-17 17:42:31 | ERROR | stderr | INFO:     Application startup complete.
2023-12-17 17:42:31 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2023-12-17 17:42:34 | INFO | model_worker | Loading the model ['Qwen-1_8B-Chat'] on worker f97a1542 ...
2023-12-17 17:42:34 | WARNING | transformers_modules.Qwen-1_8B-Chat.modeling_qwen | Try importing flash-attention for faster inference...
2023-12-17 17:42:34 | WARNING | transformers_modules.Qwen-1_8B-Chat.modeling_qwen | Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
2023-12-17 17:42:34 | WARNING | transformers_modules.Qwen-1_8B-Chat.modeling_qwen | Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
2023-12-17 17:42:34 | WARNING | transformers_modules.Qwen-1_8B-Chat.modeling_qwen | Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention
Loading checkpoint shards:   0%|                                                                 | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:  50%|████████████████████████████▌                            | 1/2 [00:03<00:03,  3.19s/it]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:05<00:00,  2.84s/it]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:05<00:00,  2.89s/it]
2023-12-17 17:42:41 | ERROR | stderr |
2023-12-17 17:42:41 | INFO | model_worker | Register to controller
INFO:     Started server process [10236]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:7861 (Press CTRL+C to quit)

==============================Langchain-Chatchat Configuration==============================
操作系统:Windows-10-10.0.22631-SP0.
python版本:3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
项目版本:v0.2.9-preview
langchain版本:0.0.344. fastchat版本:0.2.33

当前使用的分词器:ChineseRecursiveTextSplitter
当前启动的LLM模型:['Qwen-1_8B-Chat'] @ cpu
{'device': 'cpu',
 'host': '127.0.0.1',
 'infer_turbo': False,
 'model_path': 'D:\\BaiduNetdiskDownload\\Langchain-Chatchat\\models\\Qwen\\Qwen-1_8B-Chat',
 'model_path_exists': True,
 'port': 20002}
当前Embbedings模型: m3e-base @ cuda

服务端运行信息:
    OpenAI API Server: http://127.0.0.1:20000/v1
    Chatchat  API  Server: http://127.0.0.1:7861
    Chatchat WEBUI Server: http://127.0.0.1:8501
==============================Langchain-Chatchat Configuration==============================

You can now view your Streamlit app in your browser.

URL: http://127.0.0.1:8501

2023-12-17 17:43:09,440 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56720 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-17 17:43:09,443 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2023-12-17 17:43:09,561 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56720 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-17 17:43:09,565 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56720 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2023-12-17 17:43:09,586 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2023-12-17 17:43:32,852 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56764 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-17 17:43:32,855 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2023-12-17 17:43:32,870 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56764 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-17 17:43:32,872 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56764 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2023-12-17 17:43:32,890 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56764 - "POST /chat/chat HTTP/1.1" 200 OK
2023-12-17 17:43:32,901 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2023-12-17 17:43:33 | INFO | stdout | INFO:     127.0.0.1:56767 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2023-12-17 17:43:33,046 - _client.py[line:1729] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2023-12-17 17:43:33 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2023-12-17 17:43:33,493 - utils.py[line:24] - ERROR: object of type 'NoneType' has no len()
Traceback (most recent call last):
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\server\utils.py", line 22, in wrap_done
    await fn
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chains\base.py", line 381, in acall
    raise e
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chains\base.py", line 375, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chains\llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chains\llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 501, in agenerate_prompt
    return await self.agenerate(
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 461, in agenerate
    raise exceptions[0]
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 564, in _agenerate_with_cache
    return await self._agenerate(
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chat_models\openai.py", line 506, in _agenerate
    return await agenerate_from_stream(stream_iter)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 81, in agenerate_from_stream
    async for chunk in stream:
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chat_models\openai.py", line 477, in _astream
    if len(chunk["choices"]) == 0:
TypeError: object of type 'NoneType' has no len()
2023-12-17 17:43:33,525 - utils.py[line:27] - ERROR: TypeError: Caught exception: object of type 'NoneType' has no len()
2023-12-17 17:43:50,266 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56833 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-17 17:43:50,269 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2023-12-17 17:43:50,279 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56833 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2023-12-17 17:43:50,285 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56833 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2023-12-17 17:43:50,305 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:56833 - "POST /chat/chat HTTP/1.1" 200 OK
2023-12-17 17:43:50,318 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2023-12-17 17:43:50 | INFO | stdout | INFO:     127.0.0.1:56836 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2023-12-17 17:43:50,379 - _client.py[line:1729] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2023-12-17 17:43:50 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2023-12-17 17:43:50,399 - utils.py[line:24] - ERROR: object of type 'NoneType' has no len()
Traceback (most recent call last):
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\server\utils.py", line 22, in wrap_done
    await fn
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chains\base.py", line 381, in acall
    raise e
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chains\base.py", line 375, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chains\llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chains\llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 501, in agenerate_prompt
    return await self.agenerate(
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 461, in agenerate
    raise exceptions[0]
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 564, in _agenerate_with_cache
    return await self._agenerate(
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chat_models\openai.py", line 506, in _agenerate
    return await agenerate_from_stream(stream_iter)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 81, in agenerate_from_stream
    async for chunk in stream:
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\langchain\chat_models\openai.py", line 477, in _astream
    if len(chunk["choices"]) == 0:
TypeError: object of type 'NoneType' has no len()
2023-12-17 17:43:50,403 - utils.py[line:27] - ERROR: TypeError: Caught exception: object of type 'NoneType' has no len()
2023-12-17 17:43:50.404 Uncaught app exception
Traceback (most recent call last):
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\webui_pages\dialogue\dialogue.py", line 326, in dialogue_page
    chat_box.show_feedback(**feedback_kwargs,
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_chatbox\messages.py", line 309, in show_feedback
    return streamlit_feedback(**kwargs)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_feedback\__init__.py", line 104, in streamlit_feedback
    component_value = _component_func(
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 80, in __call__
    return self.create_instance(*args, default=default, key=key, **kwargs)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\metrics_util.py", line 396, in wrapped_func
    result = non_optional_func(*args, **kwargs)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 241, in create_instance
    return_value = marshall_component(dg, element)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\components\v1\components.py", line 212, in marshall_component
    component_state = register_widget(
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit_option_menu\streamlit_callback.py", line 20, in wrapper_register_widget
    return register_widget(*args, **kwargs)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\state\widgets.py", line 161, in register_widget
    return register_widget_from_metadata(metadata, ctx, widget_func_name, element_type)
  File "D:\BaiduNetdiskDownload\Langchain-Chatchat\Miniconda3\lib\site-packages\streamlit\runtime\state\widgets.py", line 194, in register_widget_from_metadata
    raise DuplicateWidgetID(
streamlit.errors.DuplicateWidgetID: There are multiple widgets with the same `key=''`.

To fix this, please make sure that the key argument is unique for each widget you create.
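
Reading the traceback above, the chain of failures appears to be: the worker streams a payload whose "choices" field is missing or None, langchain's _astream then fails on len(chunk["choices"]), the assistant reply stays empty, and the next question collides on the feedback widget key. A hedged toy illustration of that first step follows; the payload contents are made up, not taken from the worker.

# Hedged illustration of the failure at langchain/chat_models/openai.py:477
# shown in the traceback above; the chunk contents are hypothetical.
chunk = {"choices": None, "error_code": 1}   # error payload instead of a normal streaming delta

try:
    if len(chunk["choices"]) == 0:           # the exact expression the traceback fails on
        pass
except TypeError as err:
    print(err)                               # object of type 'NoneType' has no len()

# In the earlier v0.2.7 log the key was absent entirely, which is why chatchat's
# wrapper reported "KeyError: Caught exception: 'choices'" instead.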

a136214808 commented 11 months ago

You might try deleting the log files inside the logs folder.

hq221 commented 11 months ago

Same problem here. Have you solved it?

zhongzhubailong commented 11 months ago

Same problem here. Have you solved it?

No. A lot of people in the issues report this same error, so it's better not to run the LLM on CPU.

lcdisme commented 11 months ago

Same problem. I'm running with an online API: the first question gets no output and raises TypeError: object of type 'NoneType' has no len(), and the second question then raises There are multiple widgets with the same key=''.

zRzRzRzRzRzRzR commented 11 months ago

This should no longer happen now, right?

Liwan-Chen commented 10 months ago

This should no longer happen now, right?

How was it solved? I'm running into the same problem. [image]

zRzRzRzRzRzRzR commented 10 months ago

If the first message gets no answer, the second one will always hit this error; you have to wait until the first message is answered properly (with actual content).
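
The collision can be reproduced outside this project. Below is a minimal hedged sketch, not the project's actual code, and the key-derivation scheme is an assumption: if the feedback widget's key is built from the reply text, two empty replies yield the same key and Streamlit raises DuplicateWidgetID.

# duplicate_key_demo.py -- hedged sketch, not Langchain-Chatchat's real code.
# Assumption: the feedback widget key is derived from the assistant's reply,
# so two empty replies produce the identical key and collide.
import streamlit as st

replies = ["", ""]  # first answer came back empty, the user asked again, second reply also empty
for reply in replies:
    # Both iterations register a widget with key 'feedback_', so the second
    # registration raises streamlit.errors.DuplicateWidgetID.
    st.button("👍", key=f"feedback_{reply}")

Running this with streamlit run duplicate_key_demo.py produces the same error message as in the logs above.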

Liwan-Chen commented 10 months ago

If the first message gets no answer, the second one will always hit this error; you have to wait until the first message is answered properly (with actual content)

Could you post an example conversation? This is really strange.

zRzRzRzRzRzRzR commented 10 months ago
[image]

In the screenshot, the model's answer to the first message has no content at all; asking a second question right after that triggers this error in Streamlit's widget mechanism.

zhongzhubailong commented 10 months ago

This should no longer happen now, right?

How was it solved? I'm running into the same problem. [image]

Not solved; just don't run the local LLM on CPU and you'll be fine.

zhongzhubailong commented 10 months ago

This should no longer happen now, right?

Not solved; just don't run the local LLM on CPU and you'll be fine.

Liwan-Chen commented 10 months ago

This should no longer happen now, right?

Not solved; just don't run the local LLM on CPU.

Right, running the local LLM on CPU doesn't work. Does AMD support running large models locally?

zhongzhubailong commented 10 months ago

This should no longer happen now, right?

Not solved; just don't run the local LLM on CPU. Right, running the local LLM on CPU doesn't work. Does AMD support running large models locally?

I'm not sure about AMD; I'm using NVIDIA. You can also connect a model through an online API.

brealisty commented 10 months ago

I was also loading a local model on CPU; after upgrading torch to 2.2.0, the problem was solved.
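
A small hedged check to confirm the upgrade took effect in the environment that launches startup.py; the 2.2.0 threshold comes from the comment above, and the script itself is just illustrative.

# check_torch.py -- hedged helper, not part of Langchain-Chatchat.
from packaging import version
import torch

print("torch:", torch.__version__)
# The comment above reports the CPU-inference error chain went away after torch 2.2.0.
assert version.parse(torch.__version__) >= version.parse("2.2.0"), \
    "torch older than 2.2.0 - consider upgrading before retrying CPU inference"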

changk521 commented 7 months ago

I have this problem too. I'm connecting to the qwen API online, not running a local model; the first question gets no answer (the backend shows an error), and the second one then reports the same issue.

BANGzys commented 6 months ago

I have this problem too. I'm connecting to the qwen API online, not running a local model; the first question gets no answer (the backend shows an error), and the second one then reports the same issue.

Same problem here. Has it been solved?

Hotlat6077 commented 5 months ago

upgrading torch to 2.2.0

I have the same problem as you and don't know how to fix it. Stuck here.