❓ Questions
Thanks for the nice paper/work!
I have a question: how do I print the VQ code vectors of EnCodec?
from encodec import EncodecModel

model = EncodecModel.encodec_model_24khz() # load the pretrained causal 24 kHz model
model.set_target_bandwidth(24) # use all of the 32 codebooks (24 kbps)
model.quantizer.vq.layers[5]._codebook.embed # the 5+1=6th codebook of the RVQ; its shape is (1024, 128), i.e. (nbr_entries, dimensionality)
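If it helps, you can also loop over all the quantizer layers to print every codebook (a small sketch building on the snippet above):

for i, layer in enumerate(model.quantizer.vq.layers):
    print(i, layer._codebook.embed.shape)  # each should be torch.Size([1024, 128])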
Awesome! Thanks.
Hello,
I'm new to these codec concepts and didn't clearly understand how the codebooks are generated. Are they created on the fly or stored beforehand? Where can I get complete information on codebook creation and usage?
Thank you
Hi @HimaJyothi17, if I may answer your questions:
Are they created on the fly or stored beforehand?
The codebooks are created beforehand, during training. Then, for a given audio signal, the most appropriate codes are computed from these codebooks on the fly, by applying the RVQ algorithm to the continuous embedding produced by the encoder (see the sketch below).
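For instance, here is a minimal sketch of how you can obtain those codes with the released encodec package (the input file name is just an illustration):

import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(24)  # 32 codebooks

wav, sr = torchaudio.load("audio.wav")  # hypothetical input file
wav = convert_audio(wav, sr, model.sample_rate, model.channels)
with torch.no_grad():
    encoded_frames = model.encode(wav.unsqueeze(0))
codes = torch.cat([codes for codes, _ in encoded_frames], dim=-1)
print(codes.shape)  # (batch, n_q, T): one code index per codebook per time step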
Where can I get complete information on codebook creation and usage?
Maybe reading the original paper would be a good starting point :wink:
Thanks for the reply @jhauret !!
I've already gone through the original paper, but I didn't find much info on codebook creation. They do mention that they use 32 codebooks that are updated with a 0.99 decay.
What I need is info on how those 32 codebooks are created. Are they initialized with an approach like k-means, or created dynamically during training from a random initialization?
You're welcome! The paper says that they followed the same procedure as SoundStream, which first introduced RVQ. In the SoundStream paper you can find more details about codebook initialization and updates:
" The codebook of each quantizer is trained with exponential moving average updates, following the method proposed in VQ-VAE-2 [32]. To improve the usage of the codebooks we use two additional methods. First, instead of using a random initialization for the codebook vectors, we run the k-means algorithm on the first training batch and use the learned centroids as initialization. This allows the codebook to be close to the distribution of its inputs and improves its usage. Second, as proposed in [34], when a codebook vector has not been assigned any input frame for several batches, we replace it with an input frame randomly sampled within the current batch. More precisely, we track the exponential moving average of the assignments to each vector (with a decay factor of 0.99) and replace the vectors of which this statistic falls below 2. "