facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

Help with replicating the results for Hubert Pretraining #4742

Open · a43992899 opened this issue 1 year ago

a43992899 commented 1 year ago

❓ Questions and Help

What is your question?

I am trying to replicate HuBERT Base pretraining (iteration 1) on LibriSpeech 960h. However, the training curves look strange: the accuracy on unmasked tokens degrades quickly. It looks as if the model is not converging correctly. Could this be related to my k-means targets? Are there any secret ingredients for training HuBERT? What should the curves look like when the model is trained correctly?

[Attached: two training-curve screenshots]
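For reference, iteration-1 targets are typically produced by clustering MFCC features with k-means. Below is a simplified sketch of that recipe, assuming librosa and scikit-learn; all paths, the file list, and the output format are placeholders, and the actual scripts shipped with fairseq under examples/hubert/simple_kmeans should be preferred.

```python
# Simplified sketch of HuBERT iteration-1 target generation:
# 39-d MFCC features (13 MFCC + delta + delta-delta) clustered with k-means (k=100).
import numpy as np
import librosa
from sklearn.cluster import MiniBatchKMeans

def mfcc_features(wav_path, sr=16000):
    """Return (T, 39) MFCC + delta + delta-delta features for one utterance."""
    wav, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=13)   # (13, T)
    d1 = librosa.feature.delta(mfcc)
    d2 = librosa.feature.delta(mfcc, order=2)
    return np.concatenate([mfcc, d1, d2], axis=0).T        # (T, 39)

# Fit k-means on frames pooled from (a subset of) the training data.
train_wavs = ["train-clean-100/xxx.flac"]                  # placeholder list
feats = np.concatenate([mfcc_features(p) for p in train_wavs], axis=0)
km = MiniBatchKMeans(n_clusters=100, batch_size=10000, n_init=20).fit(feats)

# Label every frame of every utterance with its cluster id.
with open("train.km", "w") as f:                           # hypothetical output path
    for p in train_wavs:
        labels = km.predict(mfcc_features(p))              # one cluster id per frame
        f.write(" ".join(map(str, labels)) + "\n")         # one line per utterance, aligned with the .tsv manifest
```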

What's your environment?

wnhsu commented 1 year ago

Can you share the WER when you fine-tune the model? This doesn't look strange to me: the accuracy on masked tokens keeps improving, and that is what the model is optimized for.
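For anyone comparing numbers after fine-tuning, here is a minimal word error rate (WER) computation; the reference/hypothesis strings are placeholders, and this is a plain Levenshtein sketch rather than fairseq's own scorer.

```python
# Minimal WER sketch: word-level edit distance divided by reference length.
def wer(ref: str, hyp: str) -> float:
    r, h = ref.split(), hyp.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words ≈ 0.167
```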

hadas commented 3 months ago

Hi, did you find a solution to this problem? I see the same degradation. When loading the public checkpoint released by Meta, the unmasked loss looks much better.
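For anyone who wants to inspect the released checkpoint the same way, a minimal loading sketch with fairseq is below; the checkpoint path is a placeholder, and this mirrors how the examples/hubert feature-dumping scripts load the pretrained model.

```python
# Sketch: load the public HuBERT Base checkpoint and inspect its model/config.
from fairseq import checkpoint_utils

models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
    ["/path/to/hubert_base_ls960.pt"]   # placeholder path to the released checkpoint
)
model = models[0].eval()
print(model)            # architecture of the released model
print(saved_cfg.model)  # config the checkpoint was trained with, for comparison with your run
```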