Traceback (most recent call last):
File "question-generation/interact.py", line 238, in <module>
run()
File "question-generation/interact.py", line 144, in run
model = GPT2LMHeadModel.from_pretrained(args.model_checkpoint)
File "/opt/conda/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling_gpt2.py", line 475, in from_pretrained
"Error(s) in loading state_dict for {}:\n\t{}".format(model.__class__.__name__, "\n\t".join(error_msgs))
RuntimeError: Error(s) in loading state_dict for GPT2LMHeadModel:
size mismatch for transformer.wte.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).
size mismatch for lm_head.decoder.weight: copying a param with shape torch.Size([50265, 768]) from checkpoint, the shape in current model is torch.Size([50257, 768]).
I am able to run this easily on my local machine, but when I try to run it on Google Colab it throws this error. I am not sure what is wrong.
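The shapes in the error suggest what is going on: the checkpoint's embedding matrix has 50265 rows, but the model built by `from_pretrained` has the stock GPT-2 vocabulary of 50257, so the checkpoint was presumably saved after 8 extra special tokens were added to the vocabulary, and the Colab environment (likely a different `pytorch_pretrained_bert` version than the one installed locally) builds the model without them. Below is a minimal sketch of the mismatch and the usual fix, resizing the embedding to the checkpoint's vocabulary size before loading; the vocab sizes and dimension are taken from the traceback, everything else is illustrative:

```python
import torch
import torch.nn as nn

# Sizes taken from the error message: 50265 = 50257 base tokens + 8 extras.
base_vocab, ckpt_vocab, dim = 50257, 50265, 768

saved = nn.Embedding(ckpt_vocab, dim)    # stands in for the checkpoint's wte
current = nn.Embedding(base_vocab, dim)  # stands in for the freshly built model

# Reproduces the "size mismatch" RuntimeError from the traceback.
try:
    current.load_state_dict(saved.state_dict())
except RuntimeError:
    print("size mismatch reproduced")

# Fix: grow the embedding to the checkpoint's vocab size, then load.
resized = nn.Embedding(ckpt_vocab, dim)
resized.weight.data[:base_vocab] = current.weight.data  # keep existing rows
resized.load_state_dict(saved.state_dict())             # loads cleanly now
```

In practice this means making the model's vocabulary match the checkpoint before calling `load_state_dict` (the modern `transformers` library exposes this as `model.resize_token_embeddings(len(tokenizer))`), or simply pinning the Colab environment to the same library version that works locally.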