Closed · pinsystem closed this issue 1 year ago
Tips:

```
pip install simpleaudio
```

```python
import numpy as np
import simpleaudio as sa

from bark import SAMPLE_RATE, generate_audio, preload_models

# download and load all models
preload_models()

# generate audio from text
text_prompt = """
Hello, my name is Suno. And, uh — and I like pizza. [laughs]
But I also have other interests such as playing tic tac toe.
"""
# history_prompt selects the speaker preset
audio_array = generate_audio(text_prompt, history_prompt="en_speaker_1")

# Bark returns float32 samples in [-1, 1], but simpleaudio expects
# integer PCM, so convert to 16-bit samples before playback
audio_int16 = (audio_array * 32767).astype(np.int16)
play_obj = sa.play_buffer(audio_int16, 1, 2, SAMPLE_RATE)  # mono, 2 bytes/sample

# wait for playback to finish before exiting
play_obj.wait_done()
```
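If simpleaudio gives trouble, an alternative to live playback is writing Bark's output to a WAV file using only the standard library. A minimal sketch (`save_wav` is a hypothetical helper, not part of bark; it assumes mono float samples in [-1, 1]):

```python
import array
import wave

def save_wav(path, samples, sample_rate):
    """Write mono float samples in [-1, 1] to a 16-bit PCM WAV file."""
    # clamp each sample and scale to the signed 16-bit range
    pcm = array.array("h", (int(max(-1.0, min(1.0, s)) * 32767) for s in samples))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)       # mono
        w.setsampwidth(2)       # 2 bytes per sample = 16-bit PCM
        w.setframerate(sample_rate)
        w.writeframes(pcm.tobytes())
```

You would then call `save_wav("bark_generation.wav", audio_array, SAMPLE_RATE)` and play the file with any audio player.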
You could try manually downloading the models with reconnection enabled: https://github.com/suno-ai/bark/issues/46#issuecomment-1519055764
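The linked workaround amounts to resuming an interrupted download instead of restarting it from zero. As a rough illustration of the idea (this is not bark's actual downloader; `download_with_resume` is a hypothetical helper, and resuming only works when the server honors HTTP Range requests):

```python
import os
import time
import urllib.request

def download_with_resume(url, dest, retries=10, chunk=1 << 20):
    """Download url to dest, reconnecting and resuming after a dropped connection."""
    for attempt in range(retries):
        # resume from however many bytes we already have on disk
        done = os.path.getsize(dest) if os.path.exists(dest) else 0
        req = urllib.request.Request(url, headers={"Range": f"bytes={done}-"})
        try:
            with urllib.request.urlopen(req) as resp, open(dest, "ab") as f:
                while True:
                    block = resp.read(chunk)
                    if not block:
                        return dest
                    f.write(block)
        except OSError:
            # back off briefly, then reconnect and continue from the partial file
            time.sleep(2 ** min(attempt, 5))
    raise RuntimeError(f"giving up after {retries} attempts: {url}")
```

A real implementation would also compare the partial size against the server's reported total before requesting a range, since asking for bytes past the end of a completed file returns an error.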
@BSalita unfortunately `pip install simpleaudio` leads me to another error :(
@gkucsko now it works thank you!!
Here's the whole error:

```
C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\torchaudio\backend\utils.py:74: UserWarning: No audio backend is available.
  warnings.warn("No audio backend is available.")
No GPU being used. Careful, inference might be extremely slow!
found outdated text model, removing.
100%|██████████| 5.35G/5.35G [07:48<00:00, 11.4MiB/s]
Downloading (…)solve/main/vocab.txt: 100%|██████████| 996k/996k [00:00<00:00, 11.4MB/s]
Downloading (…)okenizer_config.json: 100%|██████████| 29.0/29.0 [00:00<00:00, 7.28kB/s]
Downloading (…)lve/main/config.json: 100%|██████████| 625/625 [00:00<00:00, 209kB/s]
100%|██████████| 100/100 [02:19<00:00, 1.39s/it]
No GPU being used. Careful, inference might be extremely slow!
 69%|██████▉   | 2.73G/3.93G [03:58<01:45, 11.4MiB/s]
Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\bark_test.py", line 9, in <module>
    audio_array = generate_audio(text_prompt)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\api.py", line 113, in generate_audio
    out = semantic_to_waveform(
          ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\api.py", line 54, in semantic_to_waveform
    coarse_tokens = generate_coarse(
                    ^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\generation.py", line 580, in generate_coarse
    model = load_model(use_gpu=use_gpu, model_type="coarse")
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\generation.py", line 296, in load_model
    model = _load_model_f(ckpt_path, device)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\generation.py", line 233, in _load_model
    _download(model_info["path"], ckpt_path)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\bark\generation.py", line 165, in _download
    raise ValueError("ERROR, something went wrong")
ValueError: ERROR, something went wrong
```
Maybe it's a connection timeout? If so, how can I increase it?