-
Works great, but I had to edit the demo.launch() line and add server_name="0.0.0.0" to get it working on the server.
I propose adopting the "--listen" argument, as is used for other gradio interfaces.
…
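A minimal sketch of how such a `--listen` flag might be wired up (the `demo` object and surrounding script are assumed here, not part of the issue; the flag name just mirrors the convention used by other gradio front-ends):

```python
import argparse

def server_name_from_args(argv):
    # Pick the bind address for demo.launch() from a --listen flag.
    # --listen binds to 0.0.0.0 so the UI is reachable from other machines;
    # the default stays on localhost.
    parser = argparse.ArgumentParser()
    parser.add_argument("--listen", action="store_true",
                        help="bind to 0.0.0.0 so other machines can reach the UI")
    args = parser.parse_args(argv)
    return "0.0.0.0" if args.listen else "127.0.0.1"

# Hypothetical wiring into the demo script:
# demo.launch(server_name=server_name_from_args(sys.argv[1:]))
```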
-
Could you make this a PR?
-
I want to know the maximum text_prompt length supported by the model,
and the best practice or method for dividing large text into chunks to use with this model.
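No official maximum is stated here, but a common workaround is to split long text at sentence boundaries and synthesize each chunk separately. A rough sketch (the ~220-character budget is an assumption, not a documented limit — Bark tends to truncate audio past roughly 13 seconds per prompt):

```python
import re

def chunk_text(text, max_chars=220):
    # Split on sentence boundaries, then greedily pack sentences into
    # chunks no longer than max_chars. A single sentence longer than
    # max_chars is kept whole rather than split mid-sentence.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be passed to generate_audio and the resulting arrays concatenated (optionally with a short silence between them).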
-
1/ When downloading dependencies, several errors appear:
Read prefs: C:\Users\shqmf\AppData\Roaming\Blender Foundation\Blender\3.5\config\userpref.blend
Blender PIP user site: C:\Users\shqmf\AppD…
-
``` py
File ~/Code/Miniforge3/lib/python3.9/site-packages/bark/api.py:66, in semantic_to_waveform(semantic_tokens, history_prompt, temp, silent, output_full)
54 coarse_tokens = generate_coarse(…
```
-
I'm completely deaf, so I was hoping that this would give me a way to see what the heck VRchat players are talking about in front of mirrors.
My computer specs:
CPU - Ryzen 5 3600XT
GPU - 3070Ti
…
-
Hi,
I managed to "fix" all the other errors I got, but I cannot find a way to fix the one in the screenshot. Any idea?
Input:
"from bark import SAMPLE_RATE, generate_audio, preload_models
f…
-
Is there a way to run this on arbitrarily long text, for example by breaking it up at a max token count (without splitting words)?
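One hedged sketch: approximate tokens by whitespace-separated words and pack chunks greedily so no word is ever split (the word-per-token equivalence is only a heuristic; Bark's actual tokenizer may count differently):

```python
def split_by_tokens(text, max_tokens=100):
    # Approximate tokens as whitespace-separated words (a rough stand-in
    # for the real tokenizer) and emit fixed-size word groups, so chunk
    # boundaries always fall between words, never inside one.
    words = text.split()
    chunks = []
    for i in range(0, len(words), max_tokens):
        chunks.append(" ".join(words[i:i + max_tokens]))
    return chunks
```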
-
Hi,
Great project. It is exactly what the Autonomous Agent space is lacking in order to remove the dependency on OpenAI or other commercial AI providers. Based on my own research (I wanted to build som…
-
Trying this on an Nvidia GPU, the 1650 Super to be exact. The entire installation process seemed to have gone fine, I selected Nvidia when asked. No issues.
However, at the end I got this error:
…