Closed: alealv closed this issue 4 years ago
I tried with --normalize and it worked!
These are guard rails to make sure data normalization is the same during pre-training and fine-tuning. The newly released vox models were pre-trained on pre-normalized data (and with no group norm in the encoder), whereas the LibriSpeech ones still use the old encoder that normalizes in the forward pass (and should therefore be fine-tuned without --normalize).
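For concreteness, the kind of per-utterance normalization being matched looks roughly like this (a sketch, assuming the usual zero-mean / unit-variance layer norm over the raw waveform; normalize_waveform is just an illustrative name, not library code):

```python
# Sketch (assumption): the per-utterance normalization that --normalize enables,
# i.e. zero-mean / unit-variance layer norm over the whole raw waveform.
import torch
import torch.nn.functional as F

def normalize_waveform(wav: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return F.layer_norm(wav, wav.shape)

wav = torch.randn(16000)              # one second of fake 16 kHz audio
print(normalize_waveform(wav).std())  # should be close to 1.0
```

If the checkpoint was pre-trained on data normalized this way, fine-tuning data has to go through the same step, which is what the guard rail enforces.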
❓ Questions and Help
Hi, I'm trying to follow this tutorial for fine-tuning a wav2vec 2.0 model.
So far I've created a manifest with dev.ltr, dev.tsv, dev.wrd, dict.ltr.txt, train.ltr, train.tsv, and train.wrd, and set the options to (AFAIK) the correct values.
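For reference, here's roughly how the .ltr letter targets can be derived from the .wrd word transcripts (a sketch, assuming the common convention of space-separated characters with | as the word boundary; the helper name is just illustrative):

```python
# Sketch (assumption): build dev.ltr from dev.wrd using the
# "H E L L O | W O R L D |" letter-target convention.
def wrd_to_ltr(wrd_path: str, ltr_path: str) -> None:
    with open(wrd_path) as fin, open(ltr_path, "w") as fout:
        for line in fin:
            words = line.strip().upper().split()
            letters = " ".join(" ".join(word) + " |" for word in words)
            fout.write(letters + "\n")

wrd_to_ltr("dev.wrd", "dev.ltr")  # paths are the files listed above
```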
But I get the following error telling me that normalize should be equal to the model's value, which is True. However, the registered model Wav2Vec_Ctc doesn't have an argument to set normalize to True.

What's your environment?
How you installed fairseq (pip, source): pip
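In case it's relevant, one way to double-check what the pre-trained checkpoint itself expects is to load it and look at the stored task arguments (a sketch; the exact key layout, args vs. cfg, varies across fairseq versions, and the path below is a placeholder):

```python
# Sketch (assumption): peek at the normalize setting stored inside the
# pre-trained checkpoint; key layout differs across fairseq versions.
import torch

state = torch.load("/path/to/pretrained_wav2vec2.pt", map_location="cpu")

if state.get("args") is not None:
    # older checkpoints keep an argparse Namespace under "args"
    print("normalize:", getattr(state["args"], "normalize", None))
elif state.get("cfg") is not None:
    # newer checkpoints keep a nested config under "cfg"
    print("normalize:", state["cfg"].get("task", {}).get("normalize"))
```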