Closed · LLianJJun closed this 4 years ago
Hi~
Alignment information used to be obtained with a Tacotron2 or Transformer model, but it has been removed from the repository. Could you please tell me why? As you know, alignment information is required to use a dataset other than LJSpeech.
I didn't have time to write a note about how to extract the alignment from Tacotron2, so I removed it in the new commit.
@xcmyz How can I obtain the alignment .npy files for Chinese audio? Please advise.
Use a forced-alignment tool to obtain the alignment between the audio and the phonemes.
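A minimal sketch of what happens after forced alignment: the aligner (e.g. Kaldi or Montreal Forced Aligner) gives each phoneme a start/end time in seconds, and those intervals must be converted into mel-spectrogram frame counts. The sample rate, hop length, and the alignment below are hypothetical placeholders, not values from this repository.

```python
# Sketch: converting forced-alignment phone intervals into
# per-phoneme mel-frame durations. All parameters are assumptions.
import numpy as np

SAMPLE_RATE = 22050   # assumed audio sample rate
HOP_LENGTH = 256      # assumed STFT hop length used for mel extraction

def intervals_to_durations(intervals):
    """intervals: list of (phone, start_sec, end_sec) from the aligner.
    Returns an int array of mel-frame counts, one per phone."""
    frames_per_sec = SAMPLE_RATE / HOP_LENGTH
    # Round interval boundaries (not widths) to frame indices so the
    # durations sum to the total frame count without rounding drift.
    boundaries = [round(start * frames_per_sec) for _, start, _ in intervals]
    boundaries.append(round(intervals[-1][2] * frames_per_sec))
    return np.diff(boundaries)

# Hypothetical alignment for the word "hat"
align = [("h", 0.00, 0.05), ("a", 0.05, 0.12), ("t", 0.12, 0.20)]
print(intervals_to_durations(align))  # -> [4 6 7]
```

The resulting array can then be saved with `np.save` as the duration target for each utterance.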
@xcmyz I have already done the alignment with Kaldi, but how are the .npy files generated? Are they produced by the TensorFlow version of Tacotron2? Are they in training_data?
I use the NVIDIA version of Tacotron2 to get the mel-spectrogram frames corresponding to each character.
I see, thanks. Is this .npy file in the training_data of the TensorFlow version of Tacotron2? Is it under audio, linear, or mels? This has confused me for a long time. Thanks!
I extract it from the Tacotron2 model. For example:
Here, the duration is the number of frames corresponding to each character; for instance, the duration of [h, a, t] would be [2, 3, 4]. (I use the PyTorch version of NVIDIA Tacotron2.)
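The extraction described above can be sketched as follows: take the Tacotron2 attention matrix for one utterance, assign each mel frame to the character with the highest attention weight, and count the frames per character. This is a minimal illustration with a toy attention matrix, not real model output; the function name is hypothetical.

```python
# Sketch: per-character durations from a Tacotron2-style attention
# matrix, matching the [h, a, t] -> [2, 3, 4] example above.
import numpy as np

def attention_to_durations(attn, n_chars):
    """attn: (n_mel_frames, n_chars) attention weights.
    Each mel frame goes to the character it attends to most strongly;
    a character's duration is the number of frames assigned to it."""
    assigned = attn.argmax(axis=1)            # character index per frame
    return np.bincount(assigned, minlength=n_chars)

# Toy attention over 9 mel frames and 3 characters [h, a, t]
attn = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.0],
    [0.2, 0.7, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.8, 0.1],
    [0.0, 0.3, 0.7],
    [0.0, 0.2, 0.8],
    [0.0, 0.1, 0.9],
    [0.0, 0.1, 0.9],
])
print(attention_to_durations(attn, 3))  # -> [2 3 4]
```

By construction the durations sum to the number of mel frames, which is what FastSpeech's length regulator requires of its targets.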
Got it, thank you very much!