Hi, thank you for sharing FunCodec, this is really awesome work!
I ran into the following issue when trying to generate audio using my own prompt audio and prompt text. Please let me know what the nature of this error is and how it can be fixed. Thank you very much!
From the Traceback, I think the error is because your prompt audio is too short, maybe shorter than 1 second? Our model consists of convolution layers and padding ops; if the prompt is too short, the kernel sizes and paddings will be larger than the input size, resulting in the error you met.
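For illustration, here is a minimal sketch of that failure mode in PyTorch (the layer sizes below are made up for the example, not the model's actual configuration):

```python
import torch

# A toy Conv1d; sizes are illustrative, not the model's real configuration.
conv = torch.nn.Conv1d(in_channels=1, out_channels=1, kernel_size=7)

# An input with only 5 time steps, i.e. shorter than the 7-sample kernel.
short_input = torch.randn(1, 1, 5)

# Raises: RuntimeError: Calculated padded input size per channel: (5).
# Kernel size: (7). Kernel size can't be greater than actual input size
out = conv(short_input)
```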
The prompt audio I used was 11 seconds long; could there be another reason why this happens?
I found this error is reported by the decoder. This may be because the prompt audio is too long and the LM decodes a very small number of tokens (a bad case, actually). You can try a shorter one. BTW, have you tried the demo prompt audio? Can it be used normally?
I recommend a prompt audio with a duration of 4~6s. This is because the training set is LibriTTS, a small corpus in which the utterances are not very long.
Thank you for your reply. The demo prompt audio works as expected.
I tried using a 5s prompt audio, but the same error still occurred. In this case, however, the input text (the text the output should speak, not to be confused with the prompt text, i.e., the transcript of the prompt audio) was one word long. Is there an optimal length for the input text as well? I noticed that generating from a one-word input text without a prompt audio causes no issues.
Additionally, how can the code be modified to accommodate longer prompt audios and fix this kernel size issue?
NOTE: The released model is trained on LibriTTS, which contains limited data, so its generalization ability is also limited. Therefore, in zero-shot mode, the input text shouldn't be too short or too long; I recommend that the total duration of the prompt and generated audio (expected value) match the training samples (5s~15s). If the prompt text is too long, the model will stop decoding early and the number of generated tokens will be very small, resulting in the kernel size error. If the input text is too short, the number of generated tokens will also be very small. Without a prompt audio, i.e., in free generation mode, the input text shouldn't be too short either.
Tips: You can estimate the duration of the generated audio with a prompt as follows: duration of generated audio = length of input text * (duration of prompt audio / length of prompt text).
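As a quick sanity check, here is a minimal sketch of that estimate (text length is measured in characters here; any unit works as long as it is the same for both texts):

```python
def estimate_generated_duration(input_text: str,
                                prompt_text: str,
                                prompt_duration_s: float) -> float:
    """Estimate the generated audio's duration in seconds:
    length of input text * (duration of prompt audio / length of prompt text)."""
    return len(input_text) * (prompt_duration_s / len(prompt_text))

# Example: a 5 s prompt with a 60-character transcript and a 96-character
# input text -> ~8 s of generated audio, so prompt + output stays within 5~15 s.
print(estimate_generated_duration("x" * 96, "y" * 60, 5.0))  # 8.0
```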
To accommodate longer prompt audios, what you need to do is finetune the model on more data to improve its generalization ability. From the aspect of code, you can modify the decoding strategy where `model.decode_codec` is called at line 171 of `text2audio_inference.py`: add a penalty to the eos token before reaching the minimum expected length in the `decode_codec` function of `LauraGenModel` in `laura_model.py`:
```python
# In LauraGenModel.decode_codec (laura_model.py).
# Requires `import numpy as np` at the top of the file.
min_length = None
max_length = None
if min_length_ratio is not None and prompt_text_lens is not None and continual is not None:
    # Scale the expected output length by the prompt's tokens-per-text ratio.
    min_length = int(float(len(continual)) / prompt_text_lens *
                     (text_lengths - prompt_text_lens) * min_length_ratio)
if max_length_ratio is not None and prompt_text_lens is not None and continual is not None:
    max_length = int(float(len(continual)) / prompt_text_lens *
                     (text_lengths - prompt_text_lens) * max_length_ratio)

# ... inside the autoregressive decoding loop, at step i:
if min_length is not None and i < min_length:
    # Penalize the eos token so decoding cannot stop before min_length steps.
    pred[:, self.codebook_size + self.sos_eos] = float(np.finfo(np.float32).min)
```
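Setting the eos logit to the float32 minimum drives its softmax probability to effectively zero, so the model cannot emit eos until at least `min_length` codec tokens have been decoded, which prevents the early stopping that triggers the kernel size error above.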
Hi, could you please briefly explain what the `continual` parameter is used for?
The `continual` parameter represents the codec tokens of the prompt audio.
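For intuition only, here is a hypothetical sketch of where such tokens come from (the `codec_model.encode` call and all names below are illustrative assumptions, not FunCodec's actual API): the prompt waveform is quantized into discrete codec tokens, and that sequence is what `decode_codec` receives as `continual`, so `len(continual)` grows with the prompt audio's duration.

```python
import torch

# Hypothetical sketch; the names and the encode() call are assumptions,
# not FunCodec's actual API.
def make_continual(codec_model, prompt_wav: torch.Tensor) -> torch.Tensor:
    """Quantize a prompt waveform into discrete codec tokens to pass as `continual`."""
    with torch.no_grad():
        codec_tokens = codec_model.encode(prompt_wav)  # assumed: (frames, n_codebooks)
    return codec_tokens  # its length scales with the prompt's duration
```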