JonathanFly / bark

🚀 BARK INFINITY GUI CMD 🎶 Powered Up Bark Text-prompted Generative Audio Model

RuntimeError: Unrecognized CachingAllocator option: garbage_collection_threshold #17

Open · aolko opened this issue 1 year ago

aolko commented 1 year ago

While attempting to run the script as written in the readme, I get:

Traceback (most recent call last):
  File "D:\tmp\bark\bark-inf\bark_perform.py", line 3, in <module>
    from bark import SAMPLE_RATE, generate_audio, preload_models
  File "D:\tmp\bark\bark-inf\bark\__init__.py", line 1, in <module>
    from .api import generate_audio, text_to_semantic, semantic_to_waveform
  File "D:\tmp\bark\bark-inf\bark\api.py", line 3, in <module>
    from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic
  File "D:\tmp\bark\bark-inf\bark\generation.py", line 24, in <module>
    torch.cuda.is_bf16_supported()
  File "D:\Python310\lib\site-packages\torch\cuda\__init__.py", line 92, in is_bf16_supported
    return torch.cuda.get_device_properties(torch.cuda.current_device()).major >= 8 and cuda_maj_decide
  File "D:\Python310\lib\site-packages\torch\cuda\__init__.py", line 481, in current_device
    _lazy_init()
  File "D:\Python310\lib\site-packages\torch\cuda\__init__.py", line 216, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unrecognized CachingAllocator option: garbage_collection_threshold

I have a GTX 1060 with 6 GB of VRAM, Windows 10, and Python 3.10.6. Also, PYTORCH_CUDA_ALLOC_CONF is set to garbage_collection_threshold:0.6,max_split_size_mb:128.
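
For anyone landing here, a minimal sanity-check sketch (nothing bark-specific) to confirm what the process actually sees; the traceback above shows the error coming from CUDA initialization when it parses PYTORCH_CUDA_ALLOC_CONF and hits an option it doesn't recognize:

```python
# Sanity check (a sketch): print the allocator config and the installed PyTorch
# version. The RuntimeError above is raised inside torch._C._cuda_init() when the
# caching allocator parses PYTORCH_CUDA_ALLOC_CONF and finds an unknown option.
import os
import torch

print("PYTORCH_CUDA_ALLOC_CONF =", os.environ.get("PYTORCH_CUDA_ALLOC_CONF"))
print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
```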

JonathanFly commented 1 year ago

> While attempting to run the script as written in the readme, I get

> I have a GTX 1060 with 6 GB of VRAM, Windows 10, and Python 3.10.6. Also, PYTORCH_CUDA_ALLOC_CONF is set to garbage_collection_threshold:0.6,max_split_size_mb:128.

You should be good to go even with the large models in the latest version.

aolko commented 1 year ago

⚠️ Not solved. Same error on the latest version:

Traceback (most recent call last):
  File "D:\tmp\bark\bark-inf\bark_perform.py", line 6, in <module>
    from bark_infinity import config
  File "D:\tmp\bark\bark-inf\bark_infinity\__init__.py", line 1, in <module>
    from .api import generate_audio, text_to_semantic, semantic_to_waveform, save_as_prompt
  File "D:\tmp\bark\bark-inf\bark_infinity\api.py", line 4, in <module>
    from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic, SAMPLE_RATE
  File "D:\tmp\bark\bark-inf\bark_infinity\generation.py", line 32, in <module>
    torch.cuda.is_bf16_supported()
  File "D:\Python310\lib\site-packages\torch\cuda\__init__.py", line 92, in is_bf16_supported
    return torch.cuda.get_device_properties(torch.cuda.current_device()).major >= 8 and cuda_maj_decide
  File "D:\Python310\lib\site-packages\torch\cuda\__init__.py", line 481, in current_device
    _lazy_init()
  File "D:\Python310\lib\site-packages\torch\cuda\__init__.py", line 216, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unrecognized CachingAllocator option: garbage_collection_threshold
JonathanFly commented 1 year ago

> ⚠️ Not solved. Same error on the latest version:
>
> RuntimeError: Unrecognized CachingAllocator option: garbage_collection_threshold

I think this is related to your version of PyTorch not being new enough. Can you try updating it, or alternatively, try the full mamba/conda install steps in the readme (they install PyTorch 2.0 specifically)? If you follow the readme steps, it won't mess up anything else on your system that uses PyTorch; everything goes into a separate environment.
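
If upgrading isn't convenient right away, here's a rough stopgap sketch (untested; it assumes the allocator options are only parsed when CUDA first initializes, so clearing the unrecognized one from the environment before bark triggers CUDA init should get past the error):

```python
# Stopgap sketch: strip the allocator option the installed PyTorch doesn't
# understand *before* importing bark (the traceback shows bark initializing CUDA
# at import time). Upgrading PyTorch is still the proper fix.
import os

conf = os.environ.get("PYTORCH_CUDA_ALLOC_CONF", "")
kept = [opt for opt in conf.split(",")
        if opt and not opt.startswith("garbage_collection_threshold")]
if kept:
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = ",".join(kept)
else:
    os.environ.pop("PYTORCH_CUDA_ALLOC_CONF", None)

from bark_infinity import config  # the failing import from the traceback
```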

aolko commented 1 year ago

> I think this is related to your version of PyTorch not being new enough. Can you try updating it, or alternatively, try the full mamba/conda install steps in the readme (they install PyTorch 2.0 specifically)?

True, I was running an old PyTorch; I've updated to 2.0.

> If you follow the readme steps, it won't mess up anything else on your system that uses PyTorch; everything goes into a separate environment.

Could you perhaps support virtualenv as well?

Also, your .bat file is hard-coded to the conda env.

JonathanFly commented 1 year ago

I'll check out venv. It's probably super easy to put an install for that in the readme. I just happened to use conda myself for everything to keep things simple, so I've barely used venv (or Poetry, for that matter).
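
Off the top of my head, a venv-based setup might look roughly like this (just a sketch: the environment name, the cu118 wheel index, and the requirements.txt filename are placeholders I haven't checked against the repo):

```python
# Rough sketch of a venv bootstrap script (hypothetical; the readme currently
# documents the mamba/conda route). Env name, wheel index, and requirements
# filename are assumptions, not confirmed project conventions.
import subprocess
import sys
import venv
from pathlib import Path

env_dir = Path("bark-venv")                       # hypothetical environment name
venv.create(env_dir, with_pip=True)

scripts = "Scripts" if sys.platform == "win32" else "bin"
pip = str(env_dir / scripts / "pip")

# PyTorch 2.0+ is what resolved the allocator error earlier in this thread.
subprocess.check_call([pip, "install", "torch",
                       "--index-url", "https://download.pytorch.org/whl/cu118"])
subprocess.check_call([pip, "install", "-r", "requirements.txt"])  # assumed filename
```

After that, running the script would presumably just be a matter of invoking the env's interpreter, e.g. bark-venv\Scripts\python bark_perform.py on Windows.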