TaridaGeorge opened this issue 3 years ago
`optimization.update_freq='[x]'` where x = 64/k should belong to the pre-training step, right?
And 24 should belong to fine-tuning? Is it 24 or 8? I saw that the base model was fine-tuned on 8 GPUs and the large model on 24.
Yup! The numbers should follow the wav2vec repo instructions.
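For reference, here's the arithmetic as a minimal sketch (the `update_freq` helper below is hypothetical, just illustrating how gradient accumulation lets k GPUs simulate the GPU counts from the README):

```python
# Minimal sketch: fairseq accumulates gradients for update_freq steps,
# so the effective GPU count is num_gpus * update_freq.
# The helper name below is made up for illustration.

def update_freq(target_gpus: int, available_gpus: int) -> int:
    """Accumulation factor needed to simulate target_gpus with available_gpus."""
    assert target_gpus % available_gpus == 0, "target should divide evenly"
    return target_gpus // available_gpus

# Pre-training the base model (64 V100s in the README) on 8 GPUs:
print(update_freq(64, 8))   # -> 8, i.e. optimization.update_freq='[8]'

# Fine-tuning per the README: 8 GPUs for base, 24 for large.
print(update_freq(8, 8))    # -> 1
print(update_freq(24, 8))   # -> 3
```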
I was wondering why, in the finetune.py file, you've set `update_freq` to 24/NUM_GPU.
In the wav2vec README (https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md) they say that the base model was trained on 64 V100 GPUs, and as I understand it, if we want to do more training on the base model we should simulate the number of GPUs they used.
Have you found that setting `update_freq` to 24/NUM_GPU works better for training, or is it a bug?
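To make the question concrete, here's a quick check (under my assumption that the effective batch scales with num_gpus × update_freq): with 24/NUM_GPU, any GPU count ends up simulating the large model's 24-GPU fine-tuning setup rather than the 8 GPUs used to fine-tune the base model.

```python
# Quick check, assuming effective batch scales with num_gpus * update_freq:
# update_freq = 24 // num_gpus always simulates 24 GPUs (the large-model
# fine-tuning setup), never the 8 GPUs used to fine-tune the base model.

for num_gpus in (1, 2, 3, 4, 6, 8, 12, 24):
    update_freq = 24 // num_gpus
    print(f"{num_gpus} GPUs x update_freq {update_freq} "
          f"= effective {num_gpus * update_freq} GPUs")  # always 24
```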