chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0

[BUG] DuplicateWidgetID: There are multiple widgets with the same key=''. #2986

Closed · ryancurry-mz closed this issue 7 months ago

ryancurry-mz commented 9 months ago

Problem Description: The error appears after the answer to the second chat turn finishes. I have tried the solutions from other issues, but the problem remains. The differences from those reports:

1. In those reports the first turn gets no reply and the error only appears on the second turn; in my case both turns reply normally.
2. Those reports start the service on CPU; I am running on GPU.

Has anyone encountered a similar problem? Any advice would be appreciated!

Environment Information (see the startup log below)

Additional Information: (screenshot attached)

Full error log:

```
import sys; print('Python %s on %s' % (sys.version, sys.platform))
/root/anaconda3/bin/conda run -n new-langchain-cc --no-capture-output python /root/.pycharm_helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 50847 --file /data/langchain-Repo/v0.2.8/Langchain-Chatchat/startup.py -a 
已连接到 pydev 调试器(内部版本号 232.9921.89)
==============================Langchain-Chatchat Configuration==============================
操作系统:Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31.
python版本:3.10.12 (main, Jul  5 2023, 18:54:27) [GCC 11.2.0]
项目版本:v0.2.8
langchain版本:0.0.354. fastchat版本:0.2.33
当前使用的分词器:ChineseRecursiveTextSplitter
当前启动的LLM模型:['Qwen1.5-7B-Chat-GPTQ-Int4'] @ cuda
{'device': 'cuda',
 'host': '127.0.0.1',
 'infer_turbo': False,
 'model_path': '/data/Qwen-Repo/Qwen/model/Qwen1.5-7B-Chat-GPTQ-Int4',
 'model_path_exists': True,
 'port': 20002}
当前Embbedings模型: m3e-base-ft @ cuda
==============================Langchain-Chatchat Configuration==============================
2024-02-17 11:51:46,534 - startup.py[line:650] - INFO: 正在启动服务:
2024-02-17 11:51:46,535 - startup.py[line:651] - INFO: 如需查看 llm_api 日志,请前往 /data/langchain-Repo/v0.2.8/Langchain-Chatchat/logs
2024-02-17 11:51:56 | ERROR | stderr | INFO:     Started server process [18107]
2024-02-17 11:51:56 | ERROR | stderr | INFO:     Waiting for application startup.
2024-02-17 11:51:56 | ERROR | stderr | INFO:     Application startup complete.
2024-02-17 11:51:56 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-02-17 11:51:57 | INFO | model_worker | Loading the model ['Qwen1.5-7B-Chat-GPTQ-Int4'] on worker e700ae9d ...
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:  50%|█████     | 1/2 [00:00<00:00,  5.77it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00,  7.31it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00,  7.01it/s]
2024-02-17 11:52:00 | ERROR | stderr | 
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
2024-02-17 11:52:36 | INFO | model_worker | Register to controller
INFO:     Started server process [18191]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:7861 (Press CTRL+C to quit)
==============================Langchain-Chatchat Configuration==============================
操作系统:Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31.
python版本:3.10.12 (main, Jul  5 2023, 18:54:27) [GCC 11.2.0]
项目版本:v0.2.8
langchain版本:0.0.354. fastchat版本:0.2.33
当前使用的分词器:ChineseRecursiveTextSplitter
当前启动的LLM模型:['Qwen1.5-7B-Chat-GPTQ-Int4'] @ cuda
{'device': 'cuda',
 'host': '127.0.0.1',
 'infer_turbo': False,
 'model_path': '/data/Qwen-Repo/Qwen/model/Qwen1.5-7B-Chat-GPTQ-Int4',
 'model_path_exists': True,
 'port': 20002}
当前Embbedings模型: m3e-base-ft @ cuda
服务端运行信息:
    OpenAI API Server: http://127.0.0.1:20000/v1
    Chatchat  API  Server: http://127.0.0.1:7861
    Chatchat WEBUI Server: http://127.0.0.1:8501
==============================Langchain-Chatchat Configuration==============================
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.
  You can now view your Streamlit app in your browser.
  URL: http://127.0.0.1:8501
  A new version of Streamlit is available.
  See what's new at https://discuss.streamlit.io/c/announcements
  Enter the following command to upgrade:
  $ pip install streamlit --upgrade
INFO:     127.0.0.1:35782 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-17 11:52:54,944 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-02-17 11:52:54,948 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:35782 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-17 11:52:55,072 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-02-17 11:52:55,075 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:35782 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-02-17 11:52:55,090 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-02-17 11:53:56,161 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-02-17 11:53:56,164 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:35794 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
INFO:     127.0.0.1:35794 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-17 11:53:56,188 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-02-17 11:53:56,190 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:35794 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
INFO:     127.0.0.1:35794 - "POST /chat/chat HTTP/1.1" 200 OK
2024-02-17 11:53:56,207 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-02-17 11:53:56,519 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2024-02-17 11:53:56 | INFO | stdout | INFO:     127.0.0.1:56230 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-02-17 11:53:56,601 - _client.py[line:1729] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2024-02-17 11:53:56 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2024-02-17 11:54:01,902 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-02-17 11:54:01,905 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:35814 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-02-17 11:54:01,926 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
2024-02-17 11:54:01,929 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-02-17 11:54:01,943 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO:     127.0.0.1:35814 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
INFO:     127.0.0.1:35814 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
INFO:     127.0.0.1:35814 - "POST /chat/chat HTTP/1.1" 200 OK
2024-02-17 11:54:02,199 - _client.py[line:1013] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2024-02-17 11:54:02 | INFO | stdout | INFO:     127.0.0.1:56250 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-02-17 11:54:02,252 - _client.py[line:1729] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2024-02-17 11:54:02 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2024-02-17 11:54:07.349 Uncaught app exception
Traceback (most recent call last):
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/data/langchain-Repo/v0.2.8/Langchain-Chatchat/webui.py", line 64, in <module>
    pages[selected_page]["func"](api=api, is_lite=is_lite)
  File "/data/langchain-Repo/v0.2.8/Langchain-Chatchat/webui_pages/dialogue/dialogue.py", line 326, in dialogue_page
    chat_box.show_feedback(**feedback_kwargs,
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit_chatbox/messages.py", line 309, in show_feedback
    return streamlit_feedback(**kwargs)
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit_feedback/__init__.py", line 104, in streamlit_feedback
    component_value = _component_func(
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit/components/v1/components.py", line 80, in __call__
    return self.create_instance(*args, default=default, key=key, **kwargs)
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit/runtime/metrics_util.py", line 396, in wrapped_func
    result = non_optional_func(*args, **kwargs)
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit/components/v1/components.py", line 241, in create_instance
    return_value = marshall_component(dg, element)
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit/components/v1/components.py", line 212, in marshall_component
    component_state = register_widget(
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit_option_menu/streamlit_callback.py", line 20, in wrapper_register_widget
    return register_widget(*args, **kwargs)
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit/runtime/state/widgets.py", line 161, in register_widget
    return register_widget_from_metadata(metadata, ctx, widget_func_name, element_type)
  File "/root/anaconda3/envs/new-langchain-cc/lib/python3.10/site-packages/streamlit/runtime/state/widgets.py", line 194, in register_widget_from_metadata
    raise DuplicateWidgetID(
streamlit.errors.DuplicateWidgetID: There are multiple widgets with the same `key=''`.
To fix this, please make sure that the `key` argument is unique for each
widget you create.
```
dosubot[bot] commented 9 months ago

🤖

Hey there, @ryancurry-mz! 🚀 Glad to see you back! Ran into another hiccup, huh? Let's see what we can do about it this time.

Based on the information you've provided and the similar issues I found in the Langchain-Chatchat repository, there are a few potential solutions to the "DuplicateWidgetID" error you're encountering.

Firstly, the error message "There are multiple widgets with the same `key=''`" means that several Streamlit widgets were registered under the same key, which is not allowed; here the colliding key is the empty string, so every feedback widget is registered as `key=''`. The key argument uniquely identifies a widget, and when it is omitted Streamlit derives an identity from the widget's other parameters, so identically-parameterized widgets collide. To avoid this, provide a unique key for each widget: check the dialogue_page function where streamlit_feedback is called, and ensure that the message_id it uses is unique on each call. If the issue persists, further examination of the dialogue_page function and the streamlit_feedback module may be necessary. You can refer to this issue for more details.
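To make the fix concrete, here is a minimal, self-contained sketch of the pattern (it assumes the trubrics `streamlit-feedback` package that the traceback points to; the message list and key scheme below are illustrative, not Langchain-Chatchat's actual code):

```python
import streamlit as st
from streamlit_feedback import streamlit_feedback  # pip install streamlit-feedback

# Illustrative chat history; in the real app this comes from chat_box state.
if "messages" not in st.session_state:
    st.session_state["messages"] = ["First answer.", "Second answer."]

for i, msg in enumerate(st.session_state["messages"]):
    st.write(msg)
    # Every widget rendered in one script run needs its own key.
    # Deriving the key from the message index (or a message_id) keeps
    # the keys unique and avoids DuplicateWidgetID.
    streamlit_feedback(feedback_type="thumbs", key=f"feedback_{i}")
```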

Secondly, you might want to ensure that the model names in model_config.py match the actual names used when loading models with fastchat. This was suggested by a user who encountered a similar error and resolved it by aligning the names. You can refer to this issue for more details.
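As an illustration of that alignment, a hypothetical excerpt (the exact layout of configs/model_config.py varies by release; the model name and path below are taken from the startup log above):

```python
# configs/model_config.py (hypothetical excerpt)

# The name listed here must match, character for character, the name
# fastchat registers the worker under at startup.
LLM_MODELS = ["Qwen1.5-7B-Chat-GPTQ-Int4"]

MODEL_PATH = {
    "llm_model": {
        "Qwen1.5-7B-Chat-GPTQ-Int4": "/data/Qwen-Repo/Qwen/model/Qwen1.5-7B-Chat-GPTQ-Int4",
    },
}
```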

Thirdly, you might want to consider updating to a newer release. A user suggested updating to version 0.2.8 or higher; since you are already on v0.2.8, the remaining step would be to move to the latest release, v0.2.10.

Lastly, you might want to consider upgrading PyTorch to version 2.2.0. One user reported that this resolved the issue when using the CPU to load local models. You can refer to this issue for more details.
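If you go that route, a quick sanity check before and after the upgrade (standard PyTorch calls; relevant here because the reporter runs the GPU path):

```python
import torch

# Confirm the installed PyTorch version and that CUDA is still visible
# after the upgrade.
print(torch.__version__)
print(torch.cuda.is_available())
```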

Unfortunately, I wasn't able to find the exact changes made in the Langchain-Chatchat codebase between versions v0.2.8 and v0.2.10 that could potentially resolve the "DuplicateWidgetID" error. However, the latest version may include bug fixes that resolve your issue.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.


#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
ryancurry-mz commented 9 months ago

After digging into it, the problem turned out to be in the feedback portion of the code in dialogue.py. Not actually solved, but commenting that block out makes the error go away. (screenshot attached)
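For anyone reproducing this, a sketch of what that workaround looks like, based only on the call visible in the traceback (around line 326 of webui_pages/dialogue/dialogue.py in v0.2.8; everything beyond chat_box.show_feedback and feedback_kwargs is an assumption):

```python
# Option A: the reporter's workaround -- disable the feedback widget.
# chat_box.show_feedback(**feedback_kwargs)

# Option B (untested alternative): keep the widget but force a unique,
# per-message key so Streamlit can tell the instances apart. message_id
# stands in for whatever per-message identifier the surrounding loop
# provides; if feedback_kwargs already carries a "key" entry, set it
# there instead of passing it here.
chat_box.show_feedback(**feedback_kwargs, key=f"feedback_{message_id}")
```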

github-actions[bot] commented 8 months ago

This issue has been marked as stale because it has had no activity for more than 30 days.

wu15936935780 commented 7 months ago

Bro, after reading your solution, this problem has shown up again (screenshot attached); is there a fix?

wu15936935780 commented 7 months ago

This is the server-side status (screenshot attached).

xiaowang-xuexiji commented 4 months ago

I ran into this problem too, but it was resolved after I adjusted the GPU memory allocation; insufficient VRAM may have been the cause.
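If you want to check whether VRAM pressure is plausible in your own setup, a quick probe (standard PyTorch call; device 0 assumed):

```python
import torch

# Free and total memory on GPU 0, in bytes.
free, total = torch.cuda.mem_get_info(0)
print(f"free: {free / 1e9:.2f} GB / total: {total / 1e9:.2f} GB")
```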