BenAAndrew / Voice-Cloning-App

A Python/Pytorch app for easily synthesising human voices
BSD 3-Clause "New" or "Revised" License

Warning! Reached max decoder steps. Either the model is low quality or the given sentence is too short/long #154

Closed: audioses closed this issue 1 year ago

audioses commented 1 year ago

Hello there, I am having this issue in the synthesis step.

Type: Exception
Text: Warning! Reached max decoder steps. Either the model is low quality or the given sentence is too short/long
Full:
Traceback (most recent call last):
  File "flask\app.py", line 1950, in full_dispatch_request
  File "flask\app.py", line 1936, in dispatch_request
  File "application\views.py", line 353, in synthesis_post
  File "synthesis\synthesize.py", line 196, in synthesize
  File "training\tacotron2_model\model.py", line 604, in inference
  File "training\tacotron2_model\model.py", line 493, in inference
Exception: Warning! Reached max decoder steps. Either the model is low quality or the given sentence is too short/long

What can I do to resolve this? I searched all over the internet but could not find a solution. If the issue is with my model, where can I find other models to test with? I cannot share the model I built because it is a celebrity voice which I am not authorised to share. Thanks so much for helping!
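For context, the exception comes from the Tacotron 2 decoder's inference loop, which caps how many mel frames it will generate before giving up. Below is a minimal sketch of what such a guard typically looks like; the names `decode`, `max_decoder_steps`, and `gate_threshold` are illustrative assumptions, not copied from this repository's code.

```python
# Illustrative sketch of a Tacotron 2-style inference loop with a decoder-step cap.
# The names (decode, max_decoder_steps, gate_threshold) are assumptions for
# illustration; the actual implementation in this repository may differ.

def inference_loop(decoder_input, decode, max_decoder_steps=1000, gate_threshold=0.5):
    mel_outputs = []
    while True:
        mel_output, gate_output = decode(decoder_input)
        mel_outputs.append(mel_output)

        # The "stop token" (gate) tells the decoder the sentence is finished.
        if gate_output > gate_threshold:
            break

        # If the gate never fires, the loop hits this hard cap and raises the
        # exception seen in the traceback above.
        if len(mel_outputs) >= max_decoder_steps:
            raise Exception(
                "Warning! Reached max decoder steps. Either the model is low "
                "quality or the given sentence is too short/long"
            )

        decoder_input = mel_output
    return mel_outputs
```

A model that never learned to predict the stop token (for example, one trained from scratch without transfer learning) will run into this cap on almost any input sentence.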

audioses commented 1 year ago

Hello, I resolved the issue thanks to some people who helped me on Discord. For those who use Google Colab to train and might encounter this issue in the future, I am sharing the solution. The mistake I made was setting the transferLearningpath variable to None in the train cell of the Colab notebook, so nothing was transferred over to the actual model. The reason I did this was that training was raising a traceback because it could not find any checkpoint to start with once the training process began. To avoid that from happening, the code in the original Google Colab notebook should be altered.
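For anyone editing the notebook, here is a hedged sketch of the kind of guard that avoids both pitfalls: it fails early with a clear message when the checkpoint file is missing, instead of tempting you to set the path to None and silently train from scratch. The variable name `transfer_learning_path` and the example path are placeholders, not the notebook's exact names.

```python
import os

# Hypothetical sketch for the Colab train cell; transfer_learning_path and the
# example path below are placeholders, not the notebook's exact names.
transfer_learning_path = "/content/drive/MyDrive/checkpoints/pretrained.pt"

if transfer_learning_path is None:
    # Training from scratch: expect a low-quality model that can hit
    # "Reached max decoder steps" during synthesis.
    print("No transfer learning checkpoint set; training from scratch.")
elif not os.path.isfile(transfer_learning_path):
    # Fail early with a clear message instead of crashing later in training or
    # working around the traceback by setting the path to None.
    raise FileNotFoundError(
        f"Transfer learning checkpoint not found: {transfer_learning_path}. "
        "Upload the pretrained checkpoint or fix the path before training."
    )
```

The validated path can then be passed to the notebook's train cell as before.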