rungjoo / CoMPM

Context Modeling with Speaker's Pre-trained Memory Tracking for Emotion Recognition in Conversation (NAACL 2022)

Unexpected key(s) in state_dict: "context_model.embeddings.position_ids", "speaker_model.embeddings.position_ids" #10

Closed utility-aagrawal closed 12 months ago

utility-aagrawal commented 1 year ago

Hi,

I am trying to reproduce your results and am getting the following error: [screenshot of the `Unexpected key(s) in state_dict` error]

Any idea what could be causing this error?

Note that I have the latest version of torch (2.0.1), not 1.8, because I couldn't find a build of 1.8 anywhere.

Also, what does the warning about RobertaModel weights not being initialized from the checkpoint mean?

Thank you for your help!

utility-aagrawal commented 1 year ago

Never mind! I was able to resolve this issue using this link - https://discuss.pytorch.org/t/missing-keys-unexpected-keys-in-state-dict-when-loading-self-trained-model/22379/4
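The linked thread suggests filtering the checkpoint down to the keys the current model actually defines before loading. A minimal sketch of that approach (the tiny `nn.Linear` stands in for the CoMPM model, and the checkpoint dict with its extra `embeddings.position_ids` key is fabricated for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the CoMPM model; names are illustrative only.
model = nn.Linear(4, 2)
checkpoint = {
    "weight": torch.zeros(2, 4),
    "bias": torch.zeros(2),
    # Extra key present in checkpoints saved with newer transformers versions
    "embeddings.position_ids": torch.arange(514),
}

# Keep only the keys the current model defines, then load strictly.
wanted = model.state_dict().keys()
filtered = {k: v for k, v in checkpoint.items() if k in wanted}
model.load_state_dict(filtered)
```

This avoids `strict=False` entirely, at the cost of silently discarding the extra keys yourself.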

utility-aagrawal commented 1 year ago

Reopening the issue because I would still like to hear from you about the other question: what does the warning about RobertaModel weights not being initialized from the checkpoint mean?

rungjoo commented 1 year ago

Sorry for my belated reply.

This problem occurred because of a Hugging Face Transformers version update. In the version I used, the `embeddings.position_ids` key was not part of the model's state_dict; recent versions include it, which causes the key mismatch.

This is not a problem if you are training the model from scratch. However, if you load a pre-trained checkpoint, you must pass `strict=False` to `load_state_dict`.
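A minimal sketch of the `strict=False` fix (the tiny `nn.Linear` and the checkpoint dict are placeholders, not the actual CoMPM model): `load_state_dict(..., strict=False)` ignores keys that are unexpected by or missing from the model, and returns a named tuple listing them so you can check that nothing important was skipped.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the CoMPM model; names are illustrative only.
model = nn.Linear(4, 2)
checkpoint = {
    "weight": torch.zeros(2, 4),
    "bias": torch.zeros(2),
    # Key added by newer transformers versions, absent from the architecture
    "embeddings.position_ids": torch.arange(514),
}

# strict=False skips mismatched keys instead of raising a RuntimeError.
result = model.load_state_dict(checkpoint, strict=False)

# Inspect what was skipped to make sure only harmless keys were dropped.
print(result.unexpected_keys)
print(result.missing_keys)
```

Since `position_ids` is a non-learned buffer (just `arange(max_position_embeddings)`), dropping it does not affect the loaded weights.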

Thank you for bringing up this point.