jishengpeng / WavTokenizer

SOTA discrete acoustic codec models with 40 tokens per second for audio language modeling
MIT License

Why so large commit loss weight #36

Open Ming-er opened 2 weeks ago

Ming-er commented 2 weeks ago

Hi, author. I noticed that in the training code the commit loss weight is set to 1000, which is much higher than in EnCodec and SpeechTokenizer. Why is the commit loss weight so large? Does it contribute to higher codebook usage, or can it trigger training instability? Thanks~

jishengpeng commented 1 week ago

> Hi, author. I noticed that in the training code the commit loss weight is set to 1000, which is much higher than in EnCodec and SpeechTokenizer. Why is the commit loss weight so large? Does it contribute to higher codebook usage, or can it trigger training instability? Thanks~

This parameter was configured a long time ago; I believe it was originally intended to keep the various loss terms on a similar order of magnitude.
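
As an illustration of that balancing, here is a minimal sketch (not the repository's actual training loss; all tensor and function names are placeholders) showing how a large `commit_weight` rescales a typically small commitment term so it is comparable to the reconstruction term:

```python
import torch.nn.functional as F

# Illustrative VQ-codec loss combination (placeholder names, not WavTokenizer's code).
# The raw commitment loss is often orders of magnitude smaller than the
# reconstruction loss, so a large weight keeps the two terms on a similar scale.
def total_loss(recon, target, z_e, z_q, commit_weight=1000.0):
    recon_loss = F.l1_loss(recon, target)         # reconstruction term
    commit_loss = F.mse_loss(z_e, z_q.detach())   # commitment term (usually very small)
    return recon_loss + commit_weight * commit_loss
```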

Ming-er commented 1 week ago

Thanks for your reply. I would also like to know how to compute the codebook usage rate.

jishengpeng commented 1 week ago

> Thanks for your reply. I would also like to know how to compute the codebook usage rate.

During inference, codebook utilization can be computed by recording the occurrence frequency of each codebook entry on the LibriTTS test-clean set.
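
One possible implementation of that measurement (a sketch, not code from this repository; the function name and default codebook size are placeholders): collect the codebook indices produced during inference, count how often each entry appears, and report the fraction of entries that occur at least once.

```python
import torch

# Sketch: fraction of codebook entries that are hit at least once over an eval set.
def codebook_usage(token_batches, codebook_size=4096):
    counts = torch.zeros(codebook_size, dtype=torch.long)
    for tokens in token_batches:  # tokens: LongTensor of codebook indices per utterance
        counts += torch.bincount(tokens.flatten(), minlength=codebook_size)
    used = (counts >= 1).sum().item()  # a code counts as "used" if it appears at least once
    return used / codebook_size
```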

Ming-er commented 1 week ago

> Thanks for your reply. I would also like to know how to compute the codebook usage rate.
>
> During inference, codebook utilization can be computed by recording the occurrence frequency of each codebook entry on the LibriTTS test-clean set.

So the threshold for classifying a code as "used" is set to 1, i.e., a code counts as used as long as it appears at least once?