chatchat-space / Langchain-Chatchat

Langchain-Chatchat(原Langchain-ChatGLM)基于 Langchain 与 ChatGLM, Qwen 与 Llama 等语言模型的 RAG 与 Agent 应用 | Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge based LLM (like ChatGLM, Qwen and Llama) RAG and Agent app with langchain
Apache License 2.0
31.84k stars 5.55k forks

[BUG] Brief problem statement / Error when starting with `python startup.py -a` #3967

Closed das990 closed 4 months ago

das990 commented 5 months ago

2024-05-08 15:26:35 | ERROR | stderr | Process model_worker - chatglm3-6b:
2024-05-08 15:26:35 | ERROR | stderr | Traceback (most recent call last):
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 314, in _bootstrap
2024-05-08 15:26:35 | ERROR | stderr |     self.run()
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 108, in run
2024-05-08 15:26:35 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 389, in run_model_worker
2024-05-08 15:26:35 | ERROR | stderr |     app = create_model_worker_app(log_level=log_level, **kwargs)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 217, in create_model_worker_app
2024-05-08 15:26:35 | ERROR | stderr |     worker = ModelWorker(
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastchat\serve\model_worker.py", line 77, in __init__
2024-05-08 15:26:35 | ERROR | stderr |     self.model, self.tokenizer = load_model(
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastchat\model\model_adapter.py", line 362, in load_model
2024-05-08 15:26:35 | ERROR | stderr |     model.to(device)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\modeling_utils.py", line 2595, in to
2024-05-08 15:26:35 | ERROR | stderr |     return super().to(*args, **kwargs)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1160, in to
2024-05-08 15:26:35 | ERROR | stderr |     return self._apply(convert)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
2024-05-08 15:26:35 | ERROR | stderr |     module._apply(fn)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
2024-05-08 15:26:35 | ERROR | stderr |     module._apply(fn)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
2024-05-08 15:26:35 | ERROR | stderr |     module._apply(fn)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
2024-05-08 15:26:35 | ERROR | stderr |     param_applied = fn(param)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
2024-05-08 15:26:35 | ERROR | stderr |     return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
2024-05-08 15:26:35 | ERROR | stderr |   File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
2024-05-08 15:26:35 | ERROR | stderr |     raise AssertionError("Torch not compiled with CUDA enabled")
2024-05-08 15:26:35 | ERROR | stderr | AssertionError: Torch not compiled with CUDA enabled
2024-05-08 15:26:39 | ERROR | stderr | INFO:     Shutting down
2024-05-08 15:26:39,349 - startup.py[line:855] - WARNING: Sending SIGKILL to {'zhipu-api': }
2024-05-08 15:26:39,350 - startup.py[line:855] - WARNING: Sending SIGKILL to {'chatglm3-6b': }
2024-05-08 15:26:39,350 - startup.py[line:855] - WARNING: Sending SIGKILL to
2024-05-08 15:26:39,351 - startup.py[line:855] - WARNING: Sending SIGKILL to
2024-05-08 15:26:39,351 - startup.py[line:855] - WARNING: Sending SIGKILL to
Traceback (most recent call last):
  File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 767, in start_main_server
    e.wait()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\managers.py", line 1097, in wait
    return self._callmethod('wait', (timeout,))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\managers.py", line 822, in _callmethod
    kind, result = conn.recv()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\connection.py", line 249, in recv
    buf = self._recv_bytes()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\connection.py", line 304, in _recv_bytes
    waitres = _winapi.WaitForMultipleObjects(
  File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 612, in f
    raise KeyboardInterrupt(f"{signalname} received")
KeyboardInterrupt: SIGINT received

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 881, in <module>
    loop.run_until_complete(start_main_server())
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 650, in run_until_complete
    return future.result()
  File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 863, in start_main_server
    p.kill()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 140, in kill
    self._popen.kill()
AttributeError: 'NoneType' object has no attribute 'kill'

System information:

Device name: 啊九
Processor: Intel(R) Core(TM) i7-10700KF CPU @ 3.80GHz 3.79 GHz
Installed RAM: 16.0 GB
Device ID: 97124EC6-555F-4099-8F0F-CC877594A570
Product ID: 00328-90000-00000-AAOEM
System type: 64-bit operating system, x64-based processor
Pen and touch: No pen or touch input is available for this display
Edition: Windows 11 Enterprise

dosubot[bot] commented 5 months ago

The error you're encountering, "Torch not compiled with CUDA enabled," indicates that PyTorch is attempting to use CUDA on a system where it's either not supported or not properly configured. Here's how to address this:

  1. Check for NVIDIA GPU: If your system doesn't have an NVIDIA GPU, ensure your code doesn't attempt to use CUDA by avoiding .to('cuda') or .cuda() calls. Instead, use .to('cpu') to explicitly run on the CPU.

  2. Install PyTorch with CUDA Support: If you have an NVIDIA GPU, ensure you've installed a PyTorch version with CUDA support. You can find the correct version for your setup on the PyTorch official website.

  3. Modify Code for Device Compatibility: Use the following snippet to dynamically assign the device based on availability:

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)

    This ensures your code runs on CUDA if available, or CPU otherwise.

  4. Update PyTorch and Dependencies: Make sure all related libraries are up to date to avoid compatibility issues.

  5. Multiprocessing Environment Setup: For applications using multiprocessing, ensure you're following PyTorch's best practices, particularly in a Windows environment. This includes setting the CUDA_VISIBLE_DEVICES environment variable appropriately for your use case.

By following these steps, you should be able to resolve the CUDA-related error and ensure your application runs correctly, whether on CPU or GPU.
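The checks above can be sketched as a small diagnostic script. This is a minimal sketch, not part of the project; `describe_torch_build` is a hypothetical helper name, and it assumes only that PyTorch is importable. The key fact it surfaces is that `torch.version.cuda` is `None` on CPU-only wheels, which is exactly the build that raises "Torch not compiled with CUDA enabled" when code calls `.to('cuda')`:

```python
import torch


def describe_torch_build() -> dict:
    """Collect the facts needed to diagnose the CUDA error."""
    return {
        "torch_version": torch.__version__,
        # None means a CPU-only wheel is installed; any string (e.g. "12.1")
        # means the wheel was built with CUDA support.
        "cuda_build": torch.version.cuda,
        # False if there is no usable GPU/driver, even on a CUDA build.
        "cuda_available": torch.cuda.is_available(),
    }


info = describe_torch_build()
print(info)

# Select the device defensively, as suggested in step 3 above:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.zeros(2, 2).to(device)  # safe on both CPU-only and CUDA builds
```

If `cuda_build` prints `None` on a machine that does have an NVIDIA GPU, reinstalling a CUDA-enabled wheel via the selector on the PyTorch website should resolve it; the exact pip index URL depends on your driver and CUDA version.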

#### Details

**Sources**

- [GPU is still used even when set to cpu](https://github.com/chatchat-space/Langchain-Chatchat/discussions/1804)
- [I want to run with cuda, but it raises an error](https://github.com/chatchat-space/Langchain-Chatchat/issues/3484)
- [[BUG] Error on startup](https://github.com/chatchat-space/Langchain-Chatchat/issues/3617)
- [Project hangs on an M1 Pro chip](https://github.com/chatchat-space/Langchain-Chatchat/issues/3689)
- [Mac M3, runtime error: AssertionError: Torch not compiled with CUDA enabled](https://github.com/chatchat-space/Langchain-Chatchat/issues/2990)
- [startup.py](https://github.com/chatchat-space/Langchain-Chatchat/blob/master/startup.py)
- [requirements.txt](https://github.com/chatchat-space/Langchain-Chatchat/blob/master/requirements.txt)
- [Dockerfile](https://github.com/chatchat-space/Langchain-Chatchat/blob/master/Dockerfile)
- [requirements_api.txt](https://github.com/chatchat-space/Langchain-Chatchat/blob/master/requirements_api.txt)
**About Dosu** This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.

github-actions[bot] commented 4 months ago

This issue has been marked as stale because it has had no activity for more than 30 days.