ming024 / FastSpeech2

An implementation of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech"
MIT License

Unused character embeddings? #209

Open g-milis opened 10 months ago

g-milis commented 10 months ago

I noticed that since each utterance is converted to a phoneme sequence, the character embeddings are never used. A quick visualization with 2D PCA shows that the embeddings corresponding to A-Z and a-z seem random, while the phoneme embeddings have a meaningful structure. Is that intentional?

[Images: PCA of TTS characters, PCA of TTS phonemes]

wabmhnsbn commented 10 months ago

Hi, sorry to bother you. I'm a student majoring in Linguistics and NLP. Recently I ran into a challenge extracting feature vectors for phonemes and characters. I would like to know if there is a pre-trained model that gives a feature vector for each phoneme and character. It seems that the "PCA of TTS Phonemes/Characters" you provided is close to my requirements! I would like to know if you have any suggestions for this.

g-milis commented 9 months ago

@wabmhnsbn the visualizations above are 2D projections (with PCA) of the trainable 256D embeddings, defined in the model's encoder. I just accessed them with model.encoder.src_word_emb.weight. Note that the model has to be trained otherwise they will be random.
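A minimal sketch of how such a projection can be produced. The embedding matrix here is a random stand-in (with a hypothetical vocabulary size of 361); with a trained checkpoint you would instead use `model.encoder.src_word_emb.weight.detach().cpu().numpy()` as mentioned above:

```python
import numpy as np

def pca_2d(emb):
    """Project rows of an (N, D) embedding matrix onto the top-2 principal components."""
    centered = emb - emb.mean(axis=0)
    # SVD of the centered matrix: rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Stand-in for model.encoder.src_word_emb.weight (vocab_size x 256);
# real embeddings only show structure after training.
emb = np.random.default_rng(0).normal(size=(361, 256))
coords = pca_2d(emb)
print(coords.shape)  # (361, 2)
```

The resulting `coords` can then be scattered with matplotlib, coloring phoneme rows and character rows separately to compare their structure.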

wabmhnsbn commented 9 months ago

> @wabmhnsbn the visualizations above are 2D projections (with PCA) of the trainable 256D embeddings, defined in the model's encoder. I just accessed them with model.encoder.src_word_emb.weight. Note that the model has to be trained otherwise they will be random.

Okay, I'll give it a try. Thank you!

asarsembayev commented 2 months ago

Great question! I have the same one.