-
I don't think LowVRAM is quite cutting it for me.
I have a 12700K and a spare 1050 2GB GPU.
Is it possible to run the models (XTTS 2.0.3) entirely on either my 1050 or my CPU?
And while you're he…
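For reference, forcing inference onto the CPU might look like the sketch below. It assumes the `TTS.api` Python interface of this repo; `pick_device`, the 4 GB threshold, and the file names are hypothetical illustrations, not part of the project.

```python
def pick_device(vram_gb: float, min_vram_gb: float = 4.0) -> str:
    """Fall back to the CPU when the GPU's VRAM is below a working threshold.

    The 4 GB cutoff is an assumption; a 2 GB 1050 is likely too small
    for XTTS either way, so it resolves to "cpu".
    """
    return "cuda" if vram_gb >= min_vram_gb else "cpu"


def synthesize_on_cpu(text: str, speaker_wav: str, out_path: str) -> None:
    """Run XTTS v2 entirely on the CPU via the TTS Python API (sketch)."""
    from TTS.api import TTS  # third-party: pip install TTS

    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cpu")
    tts.tts_to_file(text=text, speaker_wav=speaker_wav,
                    language="en", file_path=out_path)
```

`pick_device(2.0)` returns `"cpu"` for the 2 GB card in question; CPU inference on a 12700K should work, just slowly.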
-
Hello!
I'm trying to build this, but I've run into dependency hell: I'm on Arch Linux, so my Python version is the latest one, and the requirements' dependencies won't install.
```
…
-
### Describe the bug
Here is the bug:
```
PS E:\AI> tts --model_name "tts_models/multilingual/multi-dataset/xtts_v1" --text "Ceci est un teste de voix." --language_idx "fr" --use_cuda False
> tt…
-
Hello,
I am now using TTS v2 for my TTS server.
But it keeps cutting off the text because of the 250-token-limit error.
![image](https://github.com/coqui-ai/TTS/assets/83673245/3a69f782-37c…
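One common workaround is to split long input on sentence boundaries before synthesis. The sketch below treats the 250-token limit from the error as a character budget, which is an approximation; `chunk_text` is a hypothetical helper, not part of the TTS API.

```python
import re


def chunk_text(text: str, limit: int = 250) -> list[str]:
    """Split text on sentence boundaries so each chunk stays under `limit` chars.

    Approximates the model's 250-token limit with a character budget;
    each chunk can then be synthesized separately and the audio concatenated.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks: list[str] = []
    current = ""
    for sentence in sentences:
        # Start a new chunk when appending would exceed the budget.
        if current and len(current) + 1 + len(sentence) > limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

This keeps sentences intact, so prosody at chunk boundaries suffers less than with a hard character cut; a single sentence longer than the limit would still need further splitting.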
-
Hi, this isn't an issue as such, but I wanted to know: how did you decide on the parameter values for fine-tuning the respective datasets? Could you share any tips for beginners on how to adjust paramet…
-
Can we train XTTS v2 on an original dataset that is multilingual and multi-speaker?
-
It seems this isn't possible? What would be an ideal audio file length for Bark voice cloning if it can only accept a single input? I guess this might be a reason to use Tortoise instead. Usually the …
-
### Describe the bug
I use 8 train examples and 8 eval examples, with batch_size 2 and two GPUs, and I run 1 epoch.
But I found that I didn't exhaust all the training data in one epoch,
and each device gets t…
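This is the expected behavior of distributed data-parallel training: each process only ever sees its own shard of the dataset. A minimal sketch of the round-robin split that `torch.utils.data.DistributedSampler` applies (with shuffling disabled and the dataset size divisible by the number of replicas; `shard_indices` is a hypothetical stand-in, not the torch API itself):

```python
def shard_indices(num_samples: int, num_replicas: int, rank: int) -> list[int]:
    """Mimic DistributedSampler's round-robin assignment of sample indices.

    Assumes no shuffling and num_samples divisible by num_replicas.
    """
    return list(range(rank, num_samples, num_replicas))


# 8 training examples, 2 GPUs, batch_size 2:
rank0 = shard_indices(8, 2, 0)  # indices seen by GPU 0
rank1 = shard_indices(8, 2, 1)  # indices seen by GPU 1
steps_per_epoch = len(rank0) // 2  # optimizer steps per GPU per epoch
```

Each GPU processes 4 of the 8 samples (2 steps at batch_size 2) per epoch, so no single process "exhausts" the whole dataset; together the two ranks cover all 8, and gradients are averaged across them.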
-
First of all, thank you for this wonderful project. I've been playing around with it this past week, both in standalone mode and as a text-generation-webui extension, and it's all working very well. T…
-
Hi! Fine-tuning results in the error below.
The dataset (1st stage) was created without a problem.
Could you please help me solve it?
```
>> DVAE weights restored from: D:\PythonProjects\all…