suno-ai / bark

🔊 Text-Prompted Generative Audio Model
MIT License

Is my GPU ram causing problem? I only have 8GB #281

Open TanvirHafiz opened 1 year ago

TanvirHafiz commented 1 year ago

Here is the error I get. I am no programmer, but it seems it cannot run with 8 GB of VRAM (I might be wrong about that). Anyway, here is the error I get from the console:

Traceback (most recent call last):
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\routes.py", line 412, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\blocks.py", line 1021, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "G:\Bark\Bark_WebUI\bark\UI.py", line 24, in start
    audio_array = generate_audio(prompt, history_prompt=npz_names[voice])
  File "G:\Bark\Bark_WebUI\bark\bark\api.py", line 107, in generate_audio
    semantic_tokens = text_to_semantic(
  File "G:\Bark\Bark_WebUI\bark\bark\api.py", line 25, in text_to_semantic
    x_semantic = generate_text_semantic(
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 428, in generate_text_semantic
    preload_models()
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 362, in preload_models
    model = load_model(
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 310, in load_model
    model = _load_model_f(ckpt_path, device)
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 275, in _load_model
    model.to(device)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 8.00 GiB total capacity; 7.30 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
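The last line of the error message itself names one knob worth trying: `PYTORCH_CUDA_ALLOC_CONF` with `max_split_size_mb`, which tells PyTorch's caching allocator not to split blocks above a size threshold, reducing fragmentation when reserved memory far exceeds allocated memory. A minimal sketch of setting it (the 128 MiB value is an illustrative choice, not from this thread):

```python
import os

# This must be set before the first "import torch": the CUDA caching
# allocator reads the variable once, when it initializes.
# max_split_size_mb caps how large a cached block the allocator may split,
# which helps when reserved memory >> allocated memory (fragmentation).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import only after the variable is set
```

It can equally be set in the shell (`set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` on Windows) before launching the UI, which avoids any ordering concerns inside Python.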

JonathanFly commented 1 year ago

You need to set the OFFLOAD options to true. If you're having trouble, edit generation.py and add this:

(screenshot of the suggested generation.py edit; not captured in this transcript)
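For reference, the upstream bark README documents these switches as environment variables, `SUNO_OFFLOAD_CPU` and `SUNO_USE_SMALL_MODELS`, which avoids editing generation.py at all. A sketch, assuming a recent bark version that reads them:

```python
import os

# Both switches are read by bark's generation.py at import time,
# so they must be set before bark is imported.
os.environ["SUNO_OFFLOAD_CPU"] = "True"        # park idle models in CPU RAM
os.environ["SUNO_USE_SMALL_MODELS"] = "True"   # smaller checkpoints, less VRAM

# from bark import preload_models, generate_audio  # import only after this
```

Forks that predate these variables will ignore them, which is one reason editing the file directly was suggested above.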

TanvirHafiz commented 1 year ago

Nope, same problem, same error. I have a 3060 Ti with 8 GB VRAM and an eighth-gen Core i7 with 32 GB RAM.

gkucsko commented 1 year ago

maybe something else is hogging your gpu memory? try something like nvidia-smi in a terminal or use python: https://stackoverflow.com/questions/58216000/get-total-amount-of-free-gpu-memory-and-available-using-pytorch
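One way to script that check is to query `nvidia-smi` for its memory counters and parse the CSV it prints. A sketch; the sample line below is made up for illustration, and on a real machine you would feed in the actual command output:

```python
import subprocess

# Real nvidia-smi flags: emit "used, total" in MiB as bare CSV.
QUERY = [
    "nvidia-smi",
    "--query-gpu=memory.used,memory.total",
    "--format=csv,noheader,nounits",
]

def parse_memory(csv_line):
    """Parse one 'used, total' line (values in MiB) into a dict."""
    used, total = (int(part.strip()) for part in csv_line.split(","))
    return {"used_mib": used, "total_mib": total, "free_mib": total - used}

# With the NVIDIA driver installed you would run:
#   line = subprocess.check_output(QUERY, text=True).splitlines()[0]
# Here we parse a made-up sample line instead:
sample = "7475, 8192"
print(parse_memory(sample))
```

If another process already holds most of the 8 GiB, that shows up here immediately, which is exactly the situation the traceback ("7.30 GiB already allocated; 0 bytes free") hints at.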

TanvirHafiz commented 1 year ago

Does it specifically require 8 GB or more? In that case, my having only 8 GB might be right at the breaking point. I think I should have invested in a 3060 with 12 GB of VRAM.

C0untFloyd commented 1 year ago

> does it specifically require 8GB or more. in that case me having only 8GB might just be breaking point of it. i think i should have invested in a 3060 with 12GB RAM

It runs with some limitations even with very small VRAM (~2 GB), and you can even toggle it to not use your GPU at all, but you have to know how to apply these switches.
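One generic way to force any PyTorch application onto the CPU, regardless of which fork's switches it exposes, is to hide the GPU from CUDA before the process initializes it. A sketch of that approach (not specific to any bark fork):

```python
import os

# An empty CUDA_VISIBLE_DEVICES makes torch.cuda.is_available() return
# False for this process, so model code falls back to the CPU.
# Like the other variables in this thread, it must be set before torch
# initializes CUDA, ideally before "import torch".
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# import torch
# assert not torch.cuda.is_available()
```

Generation on CPU is much slower, but it sidesteps VRAM limits entirely.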

Your real problem might be this: File "G:\Bark\Bark_WebUI\installer_files\

The original Bark here has no WebUI or installer, which means you probably installed an outdated fork from somewhere. My best guess would be this one: https://github.com/Fictiverse/bark. From that page: (screenshot omitted)

However, that branch hasn't been updated for weeks and seems stale, so I'd suggest some other GUI branches like these:

Just pick your flavour...

fnrcum commented 1 year ago

Can you post your code snippet and OS? I've seen issues with setting the env variables at the Python level. I had the same issue when setting the env vars at the Python level, but when I tried at the console/system/PyCharm level, it got fixed. My GPU is 8 GB too: https://github.com/suno-ai/bark/issues/315#issuecomment-1568127751
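The timing pitfall described here is easy to reproduce: if a module reads an environment variable at import (or initialization) time, setting the variable afterwards in Python has no effect, whereas a shell-level export is in place before the interpreter even starts. A self-contained sketch with a made-up variable name (`FAKE_OFFLOAD`):

```python
import os

class FakeLib:
    """Simulates a library that caches an env var when it initializes,
    the way bark's generation.py reads its SUNO_* switches at import."""
    def __init__(self):
        self.offload = os.environ.get("FAKE_OFFLOAD", "") == "True"

# Wrong order: the "library" initializes before the variable is set.
lib_first = FakeLib()
os.environ["FAKE_OFFLOAD"] = "True"

# Right order: variable set before initialization.
lib_second = FakeLib()

print(lib_first.offload, lib_second.offload)  # the first one missed it
```

Setting the variable at the console/system level guarantees the "right order" case no matter where in the script the library is imported.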

asterocean commented 4 months ago

I figured out a solution: load modules on demand rather than loading them all at the same time. Try this pull request: https://github.com/suno-ai/bark/pull/531

TanvirHafiz commented 4 months ago

Thanks. I will try that!
