Open amant555 opened 4 years ago
Is this still an issue? Please reopen if so.
I faced the same issue, even with `transformer_lm.wmt19.en`.

My setup:
- torch: 1.9.0 (from pip)
- fairseq: from master
- CUDA Version: 11.2 (although it's irrelevant here, I've tested this on GPU too)

The problem arises from `checkpoint = torch.load(args.kenlm_model, map_location="cpu")`. Here `checkpoint["cfg"]` is a plain dict, but it is treated as an OmegaConf object. If we check the type of `checkpoint["cfg"]` at line 383 and cast it, we won't face this problem:
if "cfg" in checkpoint and checkpoint["cfg"] is not None:
if isinstance(checkpoint["cfg"], dict):
lm_args = OmegaConf.create(checkpoint["cfg"])
else:
lm_args = checkpoint["cfg"]
@alexeib, it's a very minor error, but I would like to make a pull request.
please make a pull request @max-15s, thanks!
🐛 Bug
Hi, I trained a transformer LM for inference with wav2vec 2.0. It gave a similar error: `args` in the model dict is `None`. As in the previous similar issue, I changed `state["args"]` to `state["cfg"]["model"]`, but the `data` key was missing there. So I changed it again to `state["cfg"]["tasks"]`, which gave a new error: `missing key language_modeling`. This is probably an issue caused by moving the repo to hydra config; the keys that are needed are now distributed across different namespaces, and I couldn't find a solution. Can you look into it? The error arises mainly from the `__init__` method of `FairseqLM` in `w2l_decoder.py`.