The division-by-zero bug comes from having 0 batches of eval data. I assume you have very few fine-tuning samples, so the 5% held out for eval rounds down to 0 batches. Supposing you use batch_size=32, you have around 600 fine-tuning samples overall?
To overcome that, set "test_size" to None and "test_batches=10", for example, or whatever number of batches you want to use for validation. That should do it.
I have 200 samples. I set tacotron_test_size = None and tacotron_test_batches = 10, and the error log is "test_size=640 should be either positive and smaller than the number of samples 191 or a float in the (0, 1) range".
So I set tacotron_test_size = None and tacotron_test_batches = 1, and now it works, but I don't know if that's right. Does this cause over-fitting?
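For reference, this is roughly what I have in hparams now (sketched; the batch size of 64 is inferred from the error message, since 10 batches × 64 = 640 requested samples):

```python
# Relevant hparams (sketch; batch size inferred from the error log).
tacotron_batch_size = 64
tacotron_test_size = None    # disable the fractional eval split
tacotron_test_batches = 1    # 1 * 64 = 64 eval samples < 191 available
```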