xcmyz / FastSpeech

An implementation of FastSpeech based on PyTorch.
MIT License

How to get alignment? #82

Closed: LLianJJun closed this issue 4 years ago

LLianJJun commented 4 years ago

Hi~

Alignment information was obtained using a Tacotron 2 or Transformer model, but it has been removed from the repository. Could you please tell me why? As you know, alignment information is required to use a dataset other than LJSpeech.

xcmyz commented 4 years ago

> Hi~
>
> Alignment information was obtained using a Tacotron 2 or Transformer model, but it has been removed from the repository. Could you please tell me why? As you know, alignment information is required to use a dataset other than LJSpeech.

I didn't spend time writing a note on how to extract the alignment from Tacotron 2, so I removed it in a new commit.

c9412600 commented 4 years ago

@xcmyz How do I get the alignment .npy files for Chinese audio? Please advise.

xcmyz commented 4 years ago

> @xcmyz How do I get the alignment .npy files for Chinese audio? Please advise.

Get the alignment between the audio and the phonemes with a forced-alignment tool.
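In practice the conversion looks something like the following. This is a minimal sketch, not part of this repo: it assumes the forced aligner (e.g. Kaldi or the Montreal Forced Aligner) yields (phone, start_sec, end_sec) intervals, and the `sample_rate` / `hop_length` values are illustrative and must match whatever was used to compute the mel spectrograms.

```python
# Sketch: convert forced-aligner phone intervals into per-phoneme
# mel-frame durations. sample_rate and hop_length are assumptions and
# must match the values used when computing the mel spectrograms.
sample_rate = 22050
hop_length = 256  # mel-spectrogram hop size in samples

def intervals_to_durations(intervals):
    """intervals: list of (phone, start_sec, end_sec) tuples.

    Rounding the boundary positions (rather than each interval's length)
    keeps the summed durations equal to the total number of mel frames."""
    durations = []
    for phone, start_sec, end_sec in intervals:
        start_frame = round(start_sec * sample_rate / hop_length)
        end_frame = round(end_sec * sample_rate / hop_length)
        durations.append(end_frame - start_frame)
    return durations

# Example: three phone intervals in seconds
# -> [4, 6, 7] frames at 22050 Hz with a hop of 256 samples.
print(intervals_to_durations(
    [("h", 0.00, 0.05), ("a", 0.05, 0.12), ("t", 0.12, 0.20)]))
```

The resulting duration array can then be saved with numpy.save to get an alignment .npy file of the kind discussed in this thread.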

c9412600 commented 4 years ago

@xcmyz I have already done the alignment with Kaldi, but how are the .npy files generated? It looks like they were generated with the TensorFlow version of Tacotron 2; are they the files in training_data?

xcmyz commented 4 years ago

> @xcmyz I have already done the alignment with Kaldi, but how are the .npy files generated? It looks like they were generated with the TensorFlow version of Tacotron 2; are they the files in training_data?

I used the NVIDIA version of Tacotron 2 to get the mel-spectrogram frames corresponding to each character.

c9412600 commented 4 years ago

I see, thank you. Is this .npy file in the training_data directory of the TensorFlow version of Tacotron 2? Is it under audio, linear, or mels? This has confused me for a long time, thanks!!

xcmyz commented 4 years ago

> I see, thank you. Is this .npy file in the training_data directory of the TensorFlow version of Tacotron 2? Is it under audio, linear, or mels? This has confused me for a long time, thanks!!

I extracted it from the Tacotron 2 model. For example:

  1. The input of the NVIDIA Tacotron 2 model is characters and the output is a mel spectrogram;
  2. The location-sensitive attention in Tacotron 2 outputs a matrix (shape: [length_mel, length_character]);
  3. This matrix contains the alignment between each character and the mel spectrogram (i.e. which mel-spectrogram frames each character attends to).

The duration here is just the number of mel frames corresponding to each character; for example, the duration for [h, a, t] would be [2, 3, 4]. (I used the PyTorch version of NVIDIA's Tacotron 2.)
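A minimal sketch of this frame-counting step, assuming `attn` is a NumPy array of attention weights with shape [length_mel, length_character] (the function name is illustrative, not from the repo):

```python
import numpy as np

def attention_to_durations(attn: np.ndarray) -> np.ndarray:
    """Turn a Tacotron 2 attention matrix into per-character durations.

    attn: attention weights, shape [length_mel, length_character].
    Returns an int array of shape [length_character] whose sum is length_mel.
    """
    # For each mel frame, find the character it attends to most strongly.
    best_char = attn.argmax(axis=1)  # shape: [length_mel]
    # Count how many frames each character "won"; that count is its duration.
    return np.bincount(best_char, minlength=attn.shape[1])
```

For the text "hat" this could produce [2, 3, 4] as in the example above: 'h' spans 2 mel frames, 'a' spans 3, and 't' spans 4.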

c9412600 commented 4 years ago

Great, thank you, much appreciated!