Closed: das990 closed this issue 4 months ago
The error you're encountering, "Torch not compiled with CUDA enabled," indicates that PyTorch is attempting to use CUDA on a system where it's either not supported or not properly configured. Here's how to address this:
Check for NVIDIA GPU: If your system doesn't have an NVIDIA GPU, make sure your code never requests CUDA: avoid .to('cuda') and .cuda() calls, and use .to('cpu') instead to run explicitly on the CPU.
Install PyTorch with CUDA Support: If you have an NVIDIA GPU, ensure you've installed a PyTorch version with CUDA support. You can find the correct version for your setup on the PyTorch official website.
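For instance (a sketch, not a definitive command: take the exact index URL and CUDA version for your driver from the install selector on pytorch.org):

```shell
# Example: install a CUDA 12.1 build of PyTorch from the official wheel index
pip install torch --index-url https://download.pytorch.org/whl/cu121

# On a machine without an NVIDIA GPU, install the CPU-only build instead:
# pip install torch --index-url https://download.pytorch.org/whl/cpu

# Verify which build you got (a CUDA version string, or None for a CPU-only build)
python -c "import torch; print(torch.version.cuda)"
```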
Modify Code for Device Compatibility: Use the following snippet to assign the device dynamically based on availability:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

This runs your code on CUDA when it is available and falls back to the CPU otherwise.
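Expanded into a self-contained sketch (the nn.Linear here is a toy stand-in for the real model, e.g. chatglm3-6b):

```python
import torch
import torch.nn as nn

# Select CUDA when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2)  # toy model used for illustration only
model.to(device)         # safe on CPU-only builds: no CUDA initialization happens

x = torch.randn(1, 4, device=device)
print(model(x).shape)    # torch.Size([1, 2])
```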
Update PyTorch and Dependencies: Make sure all related libraries are up to date to avoid compatibility issues.
Multiprocessing Environment Setup: For applications using multiprocessing, follow PyTorch's best practices, particularly on Windows. This includes setting the CUDA_VISIBLE_DEVICES environment variable appropriately for your use case.
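A minimal sketch of both points, assuming you want to hide the GPUs entirely (the variable must be set before torch first initializes CUDA, and on Windows the __main__ guard is mandatory for multiprocessing):

```python
import os

# Hide all GPUs from PyTorch; set to "0" (etc.) instead to expose specific devices.
# This must happen before CUDA is first initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import multiprocessing as mp

import torch


def worker() -> None:
    # With no visible devices, this is False even on a CUDA-enabled build.
    print("CUDA available in worker:", torch.cuda.is_available())


if __name__ == "__main__":
    # "spawn" is the only start method on Windows; the guard prevents
    # child processes from re-executing the module's top-level code.
    mp.set_start_method("spawn", force=True)
    p = mp.Process(target=worker)
    p.start()
    p.join()
```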
By following these steps, you should be able to resolve the CUDA-related error and ensure your application runs correctly, whether on CPU or GPU.
To continue the conversation, mention @dosu.
This issue has been marked as stale because it has had no activity for more than 30 days.
2024-05-08 15:26:35 | ERROR | stderr | Process model_worker - chatglm3-6b:
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 314, in _bootstrap
self.run()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 389, in run_model_worker
app = create_model_worker_app(log_level=log_level, **kwargs)
File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 217, in create_model_worker_app
worker = ModelWorker(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastchat\serve\model_worker.py", line 77, in __init__
self.model, self.tokenizer = load_model(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastchat\model\model_adapter.py", line 362, in load_model
model.to(device)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\modeling_utils.py", line 2595, in to
return super().to(*args, **kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1160, in to
return self._apply(convert)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
param_applied = fn(param)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
2024-05-08 15:26:39 | ERROR | stderr | INFO: Shutting down
2024-05-08 15:26:39,349 - startup.py[line:855] - WARNING: Sending SIGKILL to {'zhipu-api':}
2024-05-08 15:26:39,350 - startup.py[line:855] - WARNING: Sending SIGKILL to {'chatglm3-6b': }
2024-05-08 15:26:39,350 - startup.py[line:855] - WARNING: Sending SIGKILL to
2024-05-08 15:26:39,351 - startup.py[line:855] - WARNING: Sending SIGKILL to
2024-05-08 15:26:39,351 - startup.py[line:855] - WARNING: Sending SIGKILL to
Traceback (most recent call last):
File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 767, in start_main_server
e.wait()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\managers.py", line 1097, in wait
return self._callmethod('wait', (timeout,))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\managers.py", line 822, in _callmethod
kind, result = conn.recv()
^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\connection.py", line 249, in recv
buf = self._recv_bytes()
^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\connection.py", line 304, in _recv_bytes
waitres = _winapi.WaitForMultipleObjects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 612, in f
raise KeyboardInterrupt(f"{signalname} received")
KeyboardInterrupt: SIGINT received
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 881, in <module>
loop.run_until_complete(start_main_server())
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 650, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Langchain-Chatchat\startup.py", line 863, in start_main_server
p.kill()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\multiprocessing\process.py", line 140, in kill
self._popen.kill()
^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'kill'
System information:
Device name: 啊九
Processor: Intel(R) Core(TM) i7-10700KF CPU @ 3.80GHz 3.79 GHz
Installed RAM: 16.0 GB
Device ID: 97124EC6-555F-4099-8F0F-CC877594A570
Product ID: 00328-90000-00000-AAOEM
System type: 64-bit operating system, x64-based processor
Pen and touch: No pen or touch input is available for this display
Edition: Windows 11 Enterprise