modelscope / FunCodec

FunCodec is a research-oriented toolkit for audio quantization and downstream applications, such as text-to-speech synthesis, music generation, etc.
https://funcodec.github.io/
MIT License

ERROR Generating with prompt text and prompt audio #30

Closed pracdl314 closed 7 months ago

pracdl314 commented 8 months ago

Hi, thank you for sharing FunCodec, this is really awesome work!

I ran into the following issue when trying to generate audio using my own prompt audio and prompt text. Please let me know what the nature of this error is and how it can be fixed. Thank you very much!

File "/home/____/FunCodec/funcodec/bin/text2audio_inference.py", line 617, in <module>
    main()
  File "/home/____/FunCodec/funcodec/bin/text2audio_inference.py", line 613, in main
    inference(**kwargs)
  File "/home/____/FunCodec/funcodec/bin/text2audio_inference.py", line 454, in inference
    return inference_pipeline(data_path_and_name_and_type, raw_inputs=kwargs.get("raw_inputs", None))
  File "/home/____/FunCodec/funcodec/bin/text2audio_inference.py", line 400, in _forward
    ret_val, _ = my_model(*model_inputs)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/____/FunCodec/funcodec/bin/text2audio_inference.py", line 218, in __call__
    gen_speech = self.model.syn_audio(
  File "/home/____/FunCodec/funcodec/models/audio_generation/laura_model.py", line 565, in syn_audio
    _, _, recon_wav, _ = codec_model(codec_emb[:, continual_length:], run_mod="decode_emb")
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/____/FunCodec/funcodec/bin/codec_inference.py", line 119, in __call__
    ret_dict = self.model.inference_decoding_emb(*batch)
  File "/home/____/FunCodec/funcodec/models/codec_basic.py", line 829, in inference_decoding_emb
    recon_speech = self._decode(codes)
  File "/home/____/FunCodec/funcodec/models/codec_basic.py", line 390, in _decode
    return self._decode_frame(encoded_frames[0])
  File "/home/____/FunCodec/funcodec/models/codec_basic.py", line 401, in _decode_frame
    out = self.decoder(emb)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/____/FunCodec/funcodec/models/decoder/seanet_decoder.py", line 179, in forward
    y = self.model(z.permute(0, 2, 1))
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/____/FunCodec/funcodec/modules/normed_modules/conv.py", line 259, in forward
    x = self.conv(x)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/____/FunCodec/funcodec/modules/normed_modules/conv.py", line 157, in forward
    x = self.conv(x)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 310, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/____/anaconda3/envs/funcodec2/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 306, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Calculated padded input size per channel: (6). Kernel size: (7). Kernel size can't be greater than actual input size
ZhihaoDU commented 8 months ago

From the traceback, I think the error occurs because your own prompt is too short, maybe shorter than 1 second? Our model consists of convolution layers and padding ops; if the prompt is too short, the kernel size and padding become larger than the input size, resulting in the error you met.
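
For illustration, here is a minimal standalone sketch (not FunCodec code) showing how a 1-D convolution raises exactly this error when the time dimension of its input is shorter than the kernel:

    import torch
    import torch.nn as nn

    # A Conv1d with kernel size 7, as in the traceback, applied to an input whose
    # time dimension (6 frames) is shorter than the kernel.
    conv = nn.Conv1d(in_channels=128, out_channels=128, kernel_size=7)
    too_short = torch.randn(1, 128, 6)  # (batch, channels, time)
    try:
        conv(too_short)
    except RuntimeError as err:
        print(err)  # Kernel size can't be greater than actual input size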

pracdl314 commented 8 months ago

The prompt audio I used was 11 seconds long. Could there be another reason why this happens?

ZhihaoDU commented 8 months ago

I found this error is reported by the decoder. This may be because the prompt audio is too long, so the LM decodes a very small number of tokens (a bad case, actually). You can try a shorter one. BTW, have you tried the demo prompt audio? Can it be used normally?

ZhihaoDU commented 8 months ago

I recommend a prompt audio with a duration of 4~6 s. This is because the training set is LibriTTS, a small corpus in which the utterances are not very long.

pracdl314 commented 8 months ago

Thank you for your reply. The demo prompt audio works as expected.

I tried using a prompt audio of 5 s, but the same error still occurred. However, in this case the input text (the text the output should speak, not to be confused with the prompt text, which is the transcript of the prompt audio) was one word long. Is there an optimal length for the input text as well? I noticed that generating from a one-word input text without prompt audio causes no issues.

Additionally, how can the code be modified to accommodate longer prompt audios and fix this kernel size issue?

pracdl314 commented 8 months ago

Hi, could you please briefly explain what the continual parameter is used for?

ZhihaoDU commented 8 months ago

> Thank you for your reply. The demo prompt audio works as expected.
>
> I tried using a prompt audio of 5 s, but the same error still occurred. However, in this case the input text (the text the output should speak, not to be confused with the prompt text, which is the transcript of the prompt audio) was one word long. Is there an optimal length for the input text as well? I noticed that generating from a one-word input text without prompt audio causes no issues.
>
> Additionally, how can the code be modified to accommodate longer prompt audios and fix this kernel size issue?

NOTE: Since the released model is trained on LibriTTS, which contains limited data, its generalization ability is also limited. Therefore, in zero-shot mode, the input text shouldn't be too short or too long; I recommend that the total duration of the prompt and generated audio (expected value) match the training samples (5s~15s). If the prompt text is too long, the model will stop decoding early and the number of generated tokens will be very small, resulting in the kernel size error. If the input text is too short, the number of generated tokens will also be very small. Without a prompt audio, i.e., in free generation mode, the input text shouldn't be too short either.

Tip: You can estimate the duration of the generated audio with a prompt as follows: duration of generated audio = length of input text * (duration of prompt audio / length of prompt text)
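
As a quick sanity check, here is a minimal sketch of this estimate (the word counts and durations below are hypothetical example values):

    def estimate_generated_duration(input_text: str, prompt_text: str, prompt_duration_s: float) -> float:
        """Duration of generated audio ~= len(input text) * (prompt duration / len(prompt text))."""
        return len(input_text.split()) * (prompt_duration_s / len(prompt_text.split()))

    # Hypothetical example: a 5 s prompt with a 15-word transcript and a 12-word input text.
    prompt_text = " ".join(["word"] * 15)
    input_text = " ".join(["word"] * 12)
    print(estimate_generated_duration(input_text, prompt_text, 5.0))  # ~4.0 s of generated audio
    # Total (prompt + generated) ~= 9 s, within the recommended 5s~15s range.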

To accommodate longer prompt audios, what you need to do is fine-tune the model on more data to improve its generalization ability. On the code side, you can:

  1. filter out too-short token sequences after model.decode_codec at line 171 of text2audio_inference.py (see the sketch after the snippet below), and/or
  2. modify the decoding strategy: add a penalty to the eos token until the minimum expected length is reached, in the decode_codec function of LauraGenModel in laura_model.py:

            # Estimate the minimum (and maximum) number of codec tokens to generate,
            # scaled from the prompt: codec tokens per text unit of the prompt times
            # the remaining text length, times the given ratio.
            min_length = None
            if min_length_ratio is not None and prompt_text_lens is not None and continual is not None:
                min_length = int(float(len(continual)) / prompt_text_lens * (
                            text_lengths - prompt_text_lens) * min_length_ratio)
            if max_length_ratio is not None and prompt_text_lens is not None and continual is not None:
                max_length = int(float(len(continual)) / prompt_text_lens * (
                            text_lengths - prompt_text_lens) * max_length_ratio)

            # Inside the decoding loop: before step i reaches min_length, set the eos
            # logit to the smallest float32 value so decoding cannot stop too early.
            # (Assumes numpy is imported as np in laura_model.py.)
            if min_length is not None and i < min_length:
                pred[:, self.codebook_size + self.sos_eos] = float(np.finfo(np.float32).min)
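
For item 1 above, a minimal sketch of the kind of guard one could call right after model.decode_codec; the function name and the assumed tensor shape are hypothetical, since the exact variables at line 171 of text2audio_inference.py may differ:

    import logging
    import torch

    def is_too_short_to_synthesize(codec_tokens: torch.Tensor, min_token_len: int = 20) -> bool:
        """Return True if the decoded token sequence is too short to synthesize safely.

        codec_tokens is assumed to have shape (batch, time, num_quantizers);
        only the time dimension matters here.
        """
        if codec_tokens.shape[1] < min_token_len:
            logging.warning("Only %d codec tokens were generated; skipping synthesis "
                            "to avoid the kernel-size error.", codec_tokens.shape[1])
            return True
        return False
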
ZhihaoDU commented 8 months ago

> Hi, could you please briefly explain what the continual parameter is used for?

The continual parameter represents the codec tokens of the prompt audio.