chatchat-space / Langchain-Chatchat

Langchain-Chatchat(原Langchain-ChatGLM)基于 Langchain 与 ChatGLM, Qwen 与 Llama 等语言模型的 RAG 与 Agent 应用 | Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge based LLM (like ChatGLM, Qwen and Llama) RAG and Agent app with langchain
Apache License 2.0

{"detail":"Not Found"}: every page except the API docs fails to load, on both CPU and CUDA #3485

Closed Sapphire025 closed 5 months ago

Sapphire025 commented 6 months ago

```
==============================Langchain-Chatchat Configuration==============================
Operating system: Linux-5.4.0-167-generic-x86_64-with-glibc2.31
Python version: 3.11.8 (main, Feb 26 2024, 21:39:34) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354
fastchat version: 0.2.35

Text splitter in use: ChineseRecursiveTextSplitter
LLM models being launched: ['chatglm3-6b-128k', 'zhipu-api', 'openai-api'] @ cpu
{'device': 'cpu', 'host': '0.0.0.0', 'infer_turbo': False, 'model_path': 'chatglm3-6b-128k', 'model_path_exists': True, 'port': 20002}
{'api_key': '', 'device': 'cpu', 'host': '0.0.0.0', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'glm-4', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'cpu', 'host': '0.0.0.0', 'infer_turbo': False, 'model_name': 'gpt-4', 'online_api': True, 'openai_proxy': '', 'port': 20002}
Embeddings model in use: bge-large-zh @ cpu
==============================Langchain-Chatchat Configuration==============================
```

```
2024-03-22 18:34:53,490 - startup.py[line:655] - INFO: Starting services:
2024-03-22 18:34:53,490 - startup.py[line:656] - INFO: For llm_api logs, see /data1/Langchain-Chatchat/logs
/home/hello/anaconda3/envs/langchain/lib/python3.11/site-packages/torch/cuda/__init__.py:138: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11080). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
[the same CUDA UserWarning is emitted several more times, once per subprocess; repeats omitted]
/home/hello/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: model startup will be rewritten in Langchain-Chatchat 0.3.x; the related 0.2.x features will be deprecated
  warn_deprecated(
2024-03-22 18:35:00 | INFO | model_worker | Register to controller
2024-03-22 18:35:00 | ERROR | stderr | INFO:     Started server process [3418857]
2024-03-22 18:35:00 | ERROR | stderr | INFO:     Waiting for application startup.
2024-03-22 18:35:00 | ERROR | stderr | INFO:     Application startup complete.
2024-03-22 18:35:00 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:20010 (Press CTRL+C to quit)
2024-03-22 18:35:01 | INFO | model_worker | Loading the model ['chatglm3-6b-128k'] on worker b0be9972 ...
2024-03-22 18:35:02 | ERROR | stderr | /home/hello/anaconda3/envs/langchain/lib/python3.11/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
2024-03-22 18:35:02 | ERROR | stderr |   return self.fget.__get__(instance, owner)()
Loading checkpoint shards: 100%|██████████| 7/7 [01:32<00:00, 13.26s/it]
2024-03-22 18:36:35 | INFO | model_worker | Register to controller
INFO:     Started server process [3423615]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)
```

```
==============================Langchain-Chatchat Configuration==============================
Operating system: Linux-5.4.0-167-generic-x86_64-with-glibc2.31
Python version: 3.11.8 (main, Feb 26 2024, 21:39:34) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354
fastchat version: 0.2.35

Text splitter in use: ChineseRecursiveTextSplitter
LLM models being launched: ['chatglm3-6b-128k', 'zhipu-api', 'openai-api'] @ cpu
{'device': 'cpu', 'host': '0.0.0.0', 'infer_turbo': False, 'model_path': 'chatglm3-6b-128k', 'model_path_exists': True, 'port': 20002}
{'api_key': '', 'device': 'cpu', 'host': '0.0.0.0', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'glm-4', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'cpu', 'host': '0.0.0.0', 'infer_turbo': False, 'model_name': 'gpt-4', 'online_api': True, 'openai_proxy': '', 'port': 20002}
Embeddings model in use: bge-large-zh @ cpu

Server endpoints:
    OpenAI API Server:     http://127.0.0.1:20010/v1
    Chatchat API Server:   http://127.0.0.1:7861
    Chatchat WEBUI Server: http://0.0.0.0:8501
==============================Langchain-Chatchat Configuration==============================
```

```
INFO:     127.0.0.1:48044 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO:     127.0.0.1:48044 - "GET /docs HTTP/1.1" 200 OK
INFO:     127.0.0.1:48044 - "GET /static-offline-docs/swagger-ui.css HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:48058 - "GET /static-offline-docs/swagger-ui-bundle.js HTTP/1.1" 304 Not Modified
INFO:     127.0.0.1:48058 - "GET /openapi.json HTTP/1.1" 200 OK
2024-03-22 18:37:56 | INFO | stdout | INFO:     127.0.0.1:40086 - "GET /v1 HTTP/1.1" 404 Not Found
2024-03-22 18:37:56 | INFO | stdout | INFO:     127.0.0.1:40086 - "GET /favicon.ico HTTP/1.1" 404 Not Found
```

dosubot[bot] commented 6 months ago

Hey @Sapphire025, great to see you back! Running into a bit of a snag, I see? Let's dive into it.

Based on the information you've provided, there are a couple of potential issues that could be causing the problems you're experiencing with Langchain-Chatchat v0.2.10.

  1. 404 Not Found Errors for Web Pages: The 404 errors you're seeing suggest that the routes for these pages were not correctly defined or registered with the FastAPI application, or that the static files for these pages are not in the expected directory and so are not being served.

    To resolve this, you should ensure that all necessary routes are defined in your FastAPI application and that static files are correctly placed in the expected directories and properly referenced in your application code. You can refer to the FastAPI documentation on Routing and Static Files for more information on how to do this.

  2. CUDA Initialization Error: The CUDA initialization error could be due to incompatible versions of CUDA and PyTorch or other libraries, insufficient GPU resources or incorrect GPU device selection, or issues with the CUDA installation on your system.

    To resolve this, you should ensure that the CUDA version installed on your system is compatible with the versions of PyTorch and other GPU-accelerated libraries you are using. You can use `torch.cuda.is_available()` to check whether CUDA is available and `torch.cuda.get_device_name(0)` to see whether PyTorch can access the GPU. If the issue persists, consider reinstalling CUDA and ensuring that all environment variables (e.g., `CUDA_HOME`, `PATH`) are correctly set.

    The error message you're seeing suggests that your NVIDIA driver may be too old. You can update your GPU driver by downloading and installing a new version from the NVIDIA website. Alternatively, you can go to the PyTorch website to install a PyTorch version that has been compiled with your version of the CUDA driver.
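As a side note on reading that warning: the integer in "(found version 11080)" follows the CUDA convention of encoding a version as `major * 1000 + minor * 10`, so your driver reports CUDA 11.8. A small helper (not part of Langchain-Chatchat, just for illustration) to decode it:

```python
def decode_cuda_version(version: int) -> str:
    """Decode an integer CUDA driver/runtime version into 'major.minor'.

    CUDA encodes versions as major * 1000 + minor * 10,
    e.g. 11080 -> '11.8', 12010 -> '12.1'.
    """
    major, rest = divmod(version, 1000)
    minor = rest // 10
    return f"{major}.{minor}"

# The value reported in the UserWarning above:
print(decode_cuda_version(11080))  # -> 11.8
```

So the driver does support CUDA 11.8; the warning means the installed PyTorch wheel was built against a newer CUDA than the driver provides, which is why matching the PyTorch build to the driver (or updating the driver) resolves it.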

Without more detailed context about how the web pages are served and how CUDA is initialized and used in your application, these suggestions are based on common issues related to the described problems. If you continue to experience issues, please provide more detailed information about your setup and the exact steps you're taking that lead to these errors.


#### About Dosu This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
github-actions[bot] commented 5 months ago

This issue has been marked as stale because it has had no activity for more than 30 days.