Closed: mausamsion closed this issue 6 years ago
Sorry, just found your comment on this issue.
Hello, @llrootll. Yes, just as the Transformer uses embeddings of the words, we also use embeddings in g2p-seq2seq. But in our case, we use embeddings of the graphemes and phonemes instead of word embeddings. Because the Transformer model trains the embedding layer itself, you don't need to worry about it.
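For illustration only, here is a minimal sketch of what a trainable grapheme embedding layer looks like. This is not the actual g2p-seq2seq code; the vocabulary, embedding size, and variable names below are assumptions made up for the example.

```python
import tensorflow as tf

# Hypothetical grapheme vocabulary; the real g2p-seq2seq builds its own
# vocabularies from the training dictionary.
graphemes = ["<pad>", "<s>", "</s>", "h", "e", "l", "o"]
grapheme_to_id = {g: i for i, g in enumerate(graphemes)}

# A trainable embedding layer: each grapheme ID is mapped to a dense vector.
# Its weights are learned jointly with the rest of the model, which is why
# no pre-trained embeddings are needed.
embed = tf.keras.layers.Embedding(input_dim=len(graphemes), output_dim=64)

# Encode the word "hello" as grapheme IDs and look up their embeddings.
ids = tf.constant([[grapheme_to_id[c] for c in "hello"]])  # shape (1, 5)
vectors = embed(ids)                                       # shape (1, 5, 64)
print(vectors.shape)
```

The same idea applies on the decoder side with phoneme tokens: the model sees integer IDs, looks up learned vectors, and updates those vectors during training along with everything else.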
Hi, as you mentioned in the readme, the current version uses the attention mechanism from the Transformer model. In the Transformer paper, they use word embeddings as input. Does the current version of g2p-seq2seq also use vector embeddings of the tokens (in this case, characters) as input, or does the encoder see the tokens as they are?