jaketae / storyteller

Multimodal AI Story Teller, built with Stable Diffusion, GPT, and neural text-to-speech
MIT License

CUDA out of memory NVIDIA 2060 6G #16

Closed · cy99 closed this issue 1 year ago

cy99 commented 1 year ago

```
Traceback (most recent call last):
  File "C:\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Python39\Scripts\storyteller.exe\__main__.py", line 7, in <module>
  File "C:\Python39\lib\site-packages\storyteller\cli.py", line 39, in main
    story_teller = StoryTeller(config)
  File "C:\Python39\lib\site-packages\storyteller\utils.py", line 22, in wrapper_func
    func(*args, **kwargs)
  File "C:\Python39\lib\site-packages\storyteller\utils.py", line 36, in wrapper_func
    func(*args, **kwargs)
  File "C:\Python39\lib\site-packages\storyteller\model.py", line 31, in __init__
    self.painter = StableDiffusionPipeline.from_pretrained(
  File "C:\Python39\lib\site-packages\diffusers\pipeline_utils.py", line 270, in to
    module.to(torch_device)
  File "C:\Python39\lib\site-packages\transformers\modeling_utils.py", line 1749, in to
    return super().to(*args, **kwargs)
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 852, in to
    return self._apply(convert)
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
    module._apply(fn)
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
    module._apply(fn)
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 552, in _apply
    param_applied = fn(param)
  File "C:\Python39\lib\site-packages\torch\nn\modules\module.py", line 850, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.23 GiB already allocated; 0 bytes free; 5.30 GiB reserved in total by PyTorch)
```

jaketae commented 1 year ago

Hello @cy99, it seems like the machine you're on doesn't have enough VRAM to hold all of the models on the GPU at once. Could you try modifying the config so that some models are placed on the CPU instead of loading everything onto the GPU? For instance:

```
storyteller --writer_device "cpu"
storyteller --painter_device "cpu"
```

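To illustrate what these flags do under the hood, here is a minimal sketch (not the storyteller config itself) of how PyTorch's `.to(device)` decides whether a module's weights live in GPU VRAM or system RAM. Anything moved to `"cpu"` stops counting against the 6 GiB budget that the traceback shows being exhausted.

```python
import torch

# A stand-in for one of the pipeline's models (writer, painter, or speaker).
model = torch.nn.Linear(4, 4)

# Placing the module on "cpu" keeps its parameters in system RAM;
# "cuda" would allocate them in GPU VRAM instead.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

x = torch.randn(1, 4, device=device)
y = model(x)
print(y.shape)  # torch.Size([1, 4])
```

Running the largest model (here, Stable Diffusion via `--painter_device "cpu"`) on the CPU is slower, but it trades speed for fitting within limited VRAM.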
stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.