CMsmartvoice / One-Shot-Voice-Cloning

:relaxed: One Shot Voice Cloning based on Unet-TTS

How are the .npy files in test_wavs generated? #3

Open ZJ-CAI opened 2 years ago

ZJ-CAI commented 2 years ago

Could you please show me how the .npy files in One-Shot-Voice-Cloning/test_wavs/ are generated?

CMsmartvoice commented 2 years ago
  1. The .npy files in */test_wavs were generated with the MFA (Montreal Forced Aligner) tool, but the corresponding phoneme sequence must be known first.

  2. MFA is not the only option: any tool that can predict phoneme durations can be used, such as the acoustic model of an ASR system.

  3. The methods above estimate the duration information of the reference audio accurately. In practice, however, cloning is not very sensitive to duration accuracy, and a coarse manual estimate achieves much the same result: for example, phoneme durations can be estimated by eye and ear using a spectrogram viewer or another audio annotation tool.

  4. The Style_Encoder in this model acts as an audio frame encoder: the final output of the network depends only on the content, with phoneme position information embedded in the results. Based on these temporal position encodings, the Style_Encoder can be used to roughly estimate the phoneme durations of the reference audio. Better yet, this method does not require knowing the phoneme sequence corresponding to the audio. https://github.com/CMsmartvoice/One-Shot-Voice-Cloning/blob/6beec14888be82ade5164cc9e534f0a0c1ee38f9/TensorFlowTTS/tensorflow_tts/models/moduls/core.py#L700-L705
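For point 1, a minimal sketch of converting a forced alignment into a duration .npy file might look like the following. This is not the repo's actual script: the `SAMPLE_RATE` and `HOP_SIZE` values and the `intervals_to_durations` helper are illustrative assumptions, and the phone intervals would normally come from parsing an MFA TextGrid rather than being hard-coded.

```python
import numpy as np

# Assumed acoustic settings -- match these to your own feature extractor.
SAMPLE_RATE = 16000
HOP_SIZE = 200  # hop of 200 samples = 12.5 ms per mel frame

def intervals_to_durations(intervals):
    """Convert phone-level (start_sec, end_sec) intervals, e.g. read from an
    MFA TextGrid, into per-phoneme frame counts.

    Rounding is done on cumulative frame boundaries so the durations always
    sum to the total number of frames in the utterance."""
    durations = []
    prev_frames = 0
    for _, end in intervals:
        end_frame = int(round(end * SAMPLE_RATE / HOP_SIZE))
        durations.append(end_frame - prev_frames)
        prev_frames = end_frame
    return np.asarray(durations, dtype=np.int32)

# Example: three phonemes aligned over 0.3 s of audio
durs = intervals_to_durations([(0.0, 0.10), (0.10, 0.22), (0.22, 0.30)])
np.save("sample_durations.npy", durs)
print(durs.tolist())  # -> [8, 10, 6]
```

Rounding per-interval lengths independently can drift by a frame or two over a long utterance; accumulating on boundaries, as above, keeps the duration sum consistent with the mel-spectrogram length.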