lucidrains / audiolm-pytorch

Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch
MIT License

Question: Random semantic embedding in SemanticTransformer? #249

Open stg1205 opened 7 months ago

stg1205 commented 7 months ago

Once we already have the semantic token ids from HubertKmeans, the semantic embeddings are computed by a randomly initialized embedding layer inside SemanticTransformer. Why not use the cluster centroids of the pretrained HuBERT k-means as the embedding weights instead? A rough sketch of what I mean is below.
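
Something along these lines (only a sketch; `wav2vec.kmeans.cluster_centers_` and `semantic_embedding` are my guesses at the attribute names and may differ in the actual code, and the checkpoint paths are placeholders):

```python
import torch
from audiolm_pytorch import HubertWithKmeans, SemanticTransformer

# load the pretrained HuBERT + k-means quantizer (paths are placeholders)
wav2vec = HubertWithKmeans(
    checkpoint_path = './hubert_base_ls960.pt',
    kmeans_path = './hubert_base_ls960_L9_km500.bin'
)

semantic_transformer = SemanticTransformer(
    num_semantic_tokens = wav2vec.codebook_size,  # number of k-means clusters, e.g. 500
    dim = 1024,
    depth = 6
)

# the cluster centroids live in HuBERT feature space (e.g. 768-dim), so they need a
# projection (or dim = 768) before they can seed the transformer's embedding table
centroids = torch.from_numpy(wav2vec.kmeans.cluster_centers_).float()   # (num_clusters, feat_dim)
proj = torch.nn.Linear(centroids.shape[-1], 1024, bias = False)

with torch.no_grad():
    seeded = proj(centroids)                                             # (num_clusters, 1024)
    # `semantic_embedding` is assumed to be the internal nn.Embedding; its table
    # may also hold an extra row for the eos/start token, hence the slice
    semantic_transformer.semantic_embedding.weight[:seeded.shape[0]].copy_(seeded)
```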

biendltb commented 5 months ago

The attention mechanism in the transformer is there to capture the relationships between token ids. The semantic embeddings are randomly initialized, but they are ordinary trainable parameters, so they learn to capture those relationships during training. A toy illustration is below.
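
To make that concrete, here is a minimal toy example (not the actual audiolm-pytorch training loop) showing that a randomly initialized `nn.Embedding` receives gradients and is updated by backprop like any other weight:

```python
import torch
import torch.nn as nn

# toy demo: a randomly initialized embedding table is an ordinary trainable
# parameter, so its rows are updated by the optimizer during training
vocab_size, dim = 500, 32
emb = nn.Embedding(vocab_size, dim)           # random init, as in SemanticTransformer
head = nn.Linear(dim, vocab_size)             # stand-in for the transformer + logits head
opt = torch.optim.Adam(list(emb.parameters()) + list(head.parameters()), lr = 1e-3)

token_ids = torch.randint(0, vocab_size, (8, 16))   # fake semantic token ids
targets = torch.randint(0, vocab_size, (8, 16))

before = emb.weight.detach().clone()

logits = head(emb(token_ids))                                        # (8, 16, vocab_size)
loss = nn.functional.cross_entropy(logits.transpose(1, 2), targets)  # expects (batch, classes, seq)
loss.backward()
opt.step()

# the embedding rows that were used have moved away from their random init
print((emb.weight - before).abs().max())
```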