facebookresearch / encodec

State-of-the-art deep learning based audio codec supporting both mono 24 kHz audio and stereo 48 kHz audio.
MIT License

VQ code vectors #45

Closed · listener17 closed this issue 1 year ago

listener17 commented 1 year ago

❓ Questions

Thanks for the nice paper/work!

I have a question: how do I print the VQ code vectors of EnCodec?

jhauret commented 1 year ago
from encodec import EncodecModel

model = EncodecModel.encodec_model_24khz()  # returns the pretrained causal 24 kHz model
model.set_target_bandwidth(24)  # use all 32 codebooks (24 kbps)
model.quantizer.vq.layers[5]._codebook.embed  # the 5+1=6th codebook of the RVQ; shape is (1024, 128), i.e. (nbr_entries, dimensionality)
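
To connect these codebooks to actual codes, here is a minimal sketch (mine, not from the thread) that encodes a dummy waveform with the same pretrained model and then looks up the vectors behind the codes of the first codebook; the shapes assume the 24 kHz mono model configured as above:

import torch
from encodec import EncodecModel

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(24)

# Dummy 1-second mono waveform at 24 kHz, shape [batch, channels, samples].
wav = torch.randn(1, 1, 24000)

with torch.no_grad():
    encoded_frames = model.encode(wav)  # list of (codes, scale) tuples, one per segment

codes = encoded_frames[0][0]  # shape [batch, n_q, T]: one index per codebook per time step
print(codes.shape)

# Look up the continuous vectors behind the codes of the first codebook.
codebook0 = model.quantizer.vq.layers[0]._codebook.embed  # (1024, 128)
print(codebook0[codes[0, 0]].shape)  # (T, 128)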
listener17 commented 1 year ago

Awesome! Thanks.

HimaJyothi17 commented 9 months ago

Hello,

I'm new to these codec concepts and don't clearly understand how these codebooks are generated. Are they created on the fly or stored beforehand? Where can I get complete information on codebook creation and usage?

Thank You

jhauret commented 9 months ago

Hi @HimaJyothi17, if I may answer your questions:

Are they created on the fly or stored beforehand?

The codebooks are created beforehand, during training. Then, for a given audio, the most appropriate codes from those codebooks are computed on the fly by applying the RVQ algorithm to the continuous embedding of the audio signal produced by the encoder.
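
To make the on-the-fly part concrete, here is a rough, self-contained sketch of how RVQ assigns codes at inference time (illustrative only, not EnCodec's actual implementation); the codebooks are assumed to be already trained and fixed:

import torch

def rvq_encode(z: torch.Tensor, codebooks: list[torch.Tensor]) -> list[int]:
    """Return one code index per codebook by quantizing successive residuals."""
    codes = []
    residual = z
    for codebook in codebooks:
        # Pick the nearest codebook entry (Euclidean distance) for the current residual.
        distances = torch.cdist(residual.unsqueeze(0), codebook)  # (1, nbr_entries)
        index = int(distances.argmin())
        codes.append(index)
        # Subtract the chosen entry; the next codebook quantizes what is left.
        residual = residual - codebook[index]
    return codes

# Toy example: 4 codebooks of 1024 entries with dimensionality 128.
codebooks = [torch.randn(1024, 128) for _ in range(4)]
z = torch.randn(128)  # stands in for one continuous encoder embedding
print(rvq_encode(z, codebooks))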


Where can I get complete information on codebook creation and usage?

Maybe reading the original paper would be a good starting point :wink:

HimaJyothi17 commented 9 months ago

Thanks for the reply @jhauret !!

I've already gone through the original paper, but I didn't find much info on codebook creation. They do mention that they use 32 codebooks which are updated with a 0.99 decay.

But I need info on how they created those 32 codebooks. Are they initialized with approaches like k-means, or created dynamically during training with random initialization?

jhauret commented 9 months ago

You're welcome! The paper says that they followed the same procedure as SoundStream, which first introduced RVQ. In the SoundStream paper you can find more details about codebook initialization and updates:

" The codebook of each quantizer is trained with exponential moving average updates, following the method proposed in VQ-VAE-2 [32]. To improve the usage of the codebooks we use two additional methods. First, instead of using a random initialization for the codebook vectors, we run the k-means algorithm on the first training batch and use the learned centroids as initialization. This allows the codebook to be close to the distribution of its inputs and improves its usage. Second, as proposed in [34], when a codebook vector has not been assigned any input frame for several batches, we replace it with an input frame randomly sampled within the current batch. More precisely, we track the exponential moving average of the assignments to each vector (with a decay factor of 0.99) and replace the vectors of which this statistic falls below 2. "