Open unendin opened 5 years ago
I've got the same error while training fairseq. In my case it was a checkpoint issue. I tried to use a checkpoint that I had further pretrained from wav2vec_small.pt on another device. On the new device, pretraining worked fine with wav2vec_small.pt or with no pretrained checkpoint at all; only the checkpoint I had further pretrained failed to load.
The docs suggest that `apex.optimizers.FusedAdam` "may be used as a drop-in replacement for `torch.optim.Adam`". However, when I load an optimizer state_dict generated by Adam and then resume training, I get:

```
File "/opt/conda/lib/python3.7/site-packages/apex/optimizers/fused_adam.py", line 105, in step
    bias_correction = 1 if group['bias_correction'] else 0
KeyError: 'bias_correction'
```
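The error happens because a state_dict saved by plain `torch.optim.Adam` has no `bias_correction` key in its param groups, while `FusedAdam.step` reads that key unconditionally. A minimal sketch of a workaround, assuming you patch the state_dict before calling `load_state_dict` (the helper name is mine, not part of apex; `True` matches FusedAdam's documented `bias_correction` default):

```python
def patch_adam_state_for_fused_adam(state_dict):
    """Add FusedAdam-specific defaults that a plain torch.optim.Adam
    state_dict does not contain, so FusedAdam.load_state_dict works."""
    for group in state_dict["param_groups"]:
        # FusedAdam expects 'bias_correction' in every param group;
        # True is its constructor default.
        group.setdefault("bias_correction", True)
    return state_dict
```

You would apply this to the checkpoint's optimizer state before resuming, e.g. `fused_adam.load_state_dict(patch_adam_state_for_fused_adam(ckpt["optimizer"]))`.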