Open a43992899 opened 1 year ago
Can you share what the WER is when you fine-tune the model? It doesn't seem weird to me, since the accuracy on masked tokens continues improving, and that is what the model is optimized for.
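For reference, a minimal sketch of computing WER from scratch (word-level Levenshtein distance divided by reference length), so fine-tuned checkpoints can be compared without extra dependencies:

```python
# Minimal word error rate (WER): word-level Levenshtein edit distance
# (substitutions + insertions + deletions) divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution ("sat" -> "sit") and one deletion ("the") over 6 words.
print(wer("the cat sat on the mat", "the cat sit on mat"))  # -> 0.333...
```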
Hi, did you find a solution to the problem? I get the same decrease in loss. When loading the checkpoint released by Meta, the unmasked loss seems much better.
❓ Questions and Help
What is your question?
I am trying to replicate HuBERT base pretraining (iteration 1) on LibriSpeech 960h. However, the training curve looks odd: the unmasked correct rate degrades quickly. It seems the model is not converging correctly. Could this be related to my k-means targets? Are there any secret ingredients for training HuBERT? What should the curve look like when the model is trained correctly?
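Since the iteration-1 targets come from k-means over MFCC features (100 clusters in the paper's recipe), one quick sanity check on the k-means step is the cluster-usage entropy: if a handful of clusters dominate, the targets are degenerate and the pretraining curves will behave strangely. A minimal sketch, using random stand-in features in place of real MFCC frames (the MFCC extraction itself is omitted here):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Stand-in for real 39-d MFCC frames; in the actual recipe these would be
# extracted from the LibriSpeech 960h training set.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5000, 39)).astype(np.float32)

# HuBERT iteration 1 clusters MFCC features into 100 units.
km = MiniBatchKMeans(n_clusters=100, batch_size=1024, n_init=10, random_state=0)
labels = km.fit_predict(feats)

# Sanity check: how evenly are the clusters used? A healthy codebook
# spreads mass over many clusters; entropy far below log2(100) ~ 6.64 bits
# suggests degenerate targets.
counts = np.bincount(labels, minlength=100)
p = counts / counts.sum()
entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
print(f"clusters used: {(counts > 0).sum()} / 100, entropy: {entropy:.2f} bits")
```

On real features it is also worth spot-checking that the frame labels are aligned with the audio (same frame rate as the model's feature extractor), since an off-by-one in framing silently corrupts the targets.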
What's your environment?