w-marco opened this issue 1 month ago
I think I've found the issue:
It comes from this line in train.py:
batch_size=config["tts_batch_size"]//7,
Removing the //7 fixes the issue. I'm not sure why the division causes a problem, but dropping it resolved my reported bug.
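One plausible explanation (an assumption on my part, not confirmed anywhere in this thread): Python's floor division rounds down, so if `config["tts_batch_size"]` is smaller than 7, the computed batch size becomes 0, and depending on how that value is consumed downstream, a zero batch size can make generation produce nothing or fail. Note that the number of samples requested doesn't matter here, only the configured batch size. A minimal sketch (the config dict below is hypothetical; only the `//7` expression is from train.py):

```python
# Suspected failure mode: floor division drives the batch size to zero
# when the configured value is below the divisor.
config = {"tts_batch_size": 4}  # hypothetical value, below 7

batch_size = config["tts_batch_size"] // 7
print(batch_size)  # 0 -- a zero batch size plausibly yields no generated clips

# A guarded version would keep the VRAM-motivated reduction on GPUs
# while never flooring to zero:
batch_size = max(1, config["tts_batch_size"] // 7)
print(batch_size)  # 1
```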
That is a little odd; are you generating a fairly small number of samples? The //7 is mostly there for GPUs, so that too much VRAM is not used during negative clip generation, but you raise a good point that it isn't needed for CPU-only training.
No, I tried generating around 50k samples, so I am not sure. It seemed like some Python-level bug, as other files with the // divisor also had issues until I removed those.
I am using the automatic model training notebook, and it works up until the point where the negative clips should be generated. While with positive samples it logs something like:
with the negative clips it will only log:
Of course nothing is generated, and because the files are missing, the next step, where the clips should be augmented, naturally fails. I tried running the steps both in Colab and locally on Linux, and the error remains the same.
Any ideas why it just refuses to generate negative clips?
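If it helps to narrow this down, here is a quick sanity check to run before the augmentation step. This is a sketch under stated assumptions: the `negative_clips` directory name and the config dict are hypothetical placeholders, so substitute whatever your notebook actually uses.

```python
# Check the effective batch size and whether any negative clips were written.
# NOTE: "negative_clips" and the config dict are hypothetical placeholders.
from pathlib import Path

config = {"tts_batch_size": 4}  # whatever value your notebook uses

effective = config["tts_batch_size"] // 7
print(f"effective negative-clip batch size: {effective}")  # 0 would explain empty output

out_dir = Path("negative_clips")
n_files = len(list(out_dir.glob("*.wav"))) if out_dir.exists() else 0
print(f"{n_files} negative clips found in {out_dir}")
```

If the first line prints 0, that would point back to the //7 floor division discussed above rather than anything specific to Colab or Linux.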