jaketae / storyteller

Multimodal AI Story Teller, built with Stable Diffusion, GPT, and neural text-to-speech
MIT License
482 stars · 64 forks

AssertionError: Torch not compiled with CUDA enabled #22

Closed · merolaika closed this 2 months ago

merolaika commented 4 months ago

Describe the bug

```
$ storyteller --writer_device cuda
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 1.13.1+cu117 with CUDA 1107 (you have 2.1.2+cpu)
    Python 3.10.9 (you have 3.10.11)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
Traceback (most recent call last):
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\Scripts\storyteller.exe\__main__.py", line 7, in <module>
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\storyteller\cli.py", line 75, in main
    story_teller = StoryTeller(config)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\storyteller\utils.py", line 23, in wrapper_func
    func(*args, **kwargs)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\storyteller\utils.py", line 37, in wrapper_func
    func(*args, **kwargs)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\storyteller\model.py", line 29, in __init__
    self.writer = pipeline(
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\pipelines\__init__.py", line 1070, in pipeline
    return pipeline_class(model=model, framework=framework, task=task, **kwargs)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\pipelines\text_generation.py", line 70, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\pipelines\base.py", line 840, in __init__
    self.model.to(self.device)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2595, in to
    return super().to(*args, **kwargs)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
    return self._apply(convert)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
    param_applied = fn(param)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:\Users\mero\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```

```
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Nov_22_10:30:42_Pacific_Standard_Time_2023
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0
```

```
$ nvidia-smi
Sun Feb 11 11:18:17 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 551.23                 Driver Version: 551.23         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4070      WDDM  |   00000000:01:00.0  On |                  N/A |
|  0%   49C    P0             30W / 215W  |    704MiB /  12282MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory   |
|        ID   ID                                                             Usage        |
|=========================================================================================|
|    0   N/A  N/A      1764    C+G   ...nt.CBS_cw5n1h2txyewy\SearchHost.exe      N/A      |
|    0   N/A  N/A      4080    C+G   ...2txyewy\StartMenuExperienceHost.exe      N/A      |
|    0   N/A  N/A      5752    C+G   ...GeForce Experience\NVIDIA Share.exe      N/A      |
|    0   N/A  N/A     16124    C+G   ...n\121.0.2277.106\msedgewebview2.exe      N/A      |
|    0   N/A  N/A     16416    C+G   ...oogle\Chrome\Application\chrome.exe      N/A      |
|    0   N/A  N/A     16624    C+G   C:\Windows\explorer.exe                     N/A      |
|    0   N/A  N/A     18292    C+G   ...2.0_x64__cv1g1gvanyjgm\WhatsApp.exe      N/A      |
|    0   N/A  N/A     18656    C+G   ...siveControlPanel\SystemSettings.exe      N/A      |
|    0   N/A  N/A     20540    C+G   ...ogram Files\Notepad++\notepad++.exe      N/A      |
|    0   N/A  N/A     20768    C+G   ...US\ArmouryDevice\asus_framework.exe      N/A      |
|    0   N/A  N/A     21440    C+G   ...5n1h2txyewy\ShellExperienceHost.exe      N/A      |
|    0   N/A  N/A     26836    C+G   ...__8wekyb3d8bbwe\WindowsTerminal.exe      N/A      |
+-----------------------------------------------------------------------------------------+
```


jaketae commented 4 months ago

Hi, thanks for the report.

My guess is that the version of PyTorch you are using does not have CUDA support. Can you double-check that you can correctly use CUDA from PyTorch? A simple script like this might do; on a CPU-only build, the `.to("cuda")` call raises the same `AssertionError` you are seeing:

```python
import torch

x = torch.randn(3, 3).to("cuda")
```

You can also check your PyTorch installation with:

```shell
pip list | grep torch
```

(On Windows, `pip list | findstr torch` is the equivalent.)
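To make that check explicit, here is a minimal sketch (a hypothetical helper, not part of storyteller) that classifies a PyTorch pip version string by its build tag:

```python
def is_cuda_build(torch_version: str) -> bool:
    """Return True if a PyTorch version string carries a CUDA build tag.

    PyTorch pip wheels append a local version tag: "+cpu" for CPU-only
    builds, "+cu117"/"+cu121" etc. for CUDA builds. Some conda builds
    omit the tag entirely, so treat this as a heuristic.
    """
    return "+cu" in torch_version and "+cpu" not in torch_version

print(is_cuda_build("2.1.2+cpu"))     # the build from the traceback -> False
print(is_cuda_build("1.13.1+cu117"))  # the build xformers expects -> True
```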

The listed `torch` version should carry a CUDA build tag such as `+cu118` or `+cu121`, not the `+cpu` tag that appears in your traceback (`2.1.2+cpu`).
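If the CPU-only wheel is indeed installed, the usual fix is to reinstall torch from a CUDA wheel index. A sketch (the `cu121` tag is an example; pick the index matching a CUDA version your driver supports):

```shell
# Remove the CPU-only build, then install a CUDA build.
# The nvidia-smi output above reports CUDA 12.4, so cu121 wheels should work.
pip uninstall -y torch
pip install torch --index-url https://download.pytorch.org/whl/cu121
```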

stale[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.