FreedomIntelligence / ChatGPT-Detection-PR-HPPT

Codes and dataset for the paper: Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect ChatGPT-Generated Text

Cannot load fine-tuned weights: `Missing key(s) in state_dict: "roberta.embeddings.position_ids", [...]` #4

Open alelom opened 9 months ago

alelom commented 9 months ago

I'm getting the following error when running `inference.py` with the downloaded PR_model weights, after setting `model_path` to `PR_model_MSE.pt` instead of `best_model.pt`.

Error(s) in loading state_dict for RobertaForSequenceClassification:
        Missing key(s) in state_dict: "roberta.embeddings.position_ids", "roberta.embeddings.word_embeddings.weight", 
[...]

Please note that I created a fresh environment and installed all dependencies from requirements.txt.

Full error:

```
Error(s) in loading state_dict for RobertaForSequenceClassification:
        Missing key(s) in state_dict: "roberta.embeddings.position_ids", "roberta.embeddings.word_embeddings.weight", "roberta.embeddings.position_embeddings.weight", "roberta.embeddings.token_type_embeddings.weight", "roberta.embeddings.LayerNorm.weight", "roberta.embeddings.LayerNorm.bias", "roberta.encoder.layer.0.attention.self.query.weight", [... the same attention/intermediate/output weight and bias keys for encoder layers 0 through 11 ...], "roberta.encoder.layer.11.output.LayerNorm.weight", "roberta.encoder.layer.11.output.LayerNorm.bias", "classifier.dense.weight", "classifier.dense.bias", "classifier.out_proj.weight", "classifier.out_proj.bias".
        Unexpected key(s) in state_dict: "model.roberta.embeddings.position_ids", "model.roberta.embeddings.word_embeddings.weight", [... exactly the same key list as above, each prefixed with "model." ...], "model.classifier.dense.weight", "model.classifier.dense.bias", "model.classifier.out_proj.weight", "model.classifier.out_proj.bias".
```
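For what it's worth, the two key lists suggest the checkpoint was saved from a wrapper module: every "unexpected" key is exactly a "missing" key with a `model.` prefix. A possible workaround (untested against this repo's checkpoint, and `strip_prefix` is a hypothetical helper, not part of the repo) is to strip that prefix before calling `load_state_dict`:

```python
def strip_prefix(state_dict, prefix="model."):
    """Return a copy of state_dict with `prefix` removed from each key.

    Keys without the prefix are kept unchanged, so the function is a
    no-op on a checkpoint that was saved without the wrapper.
    """
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Illustration with dummy keys mimicking the error message:
ckpt = {
    "model.roberta.embeddings.position_ids": 0,
    "model.classifier.out_proj.bias": 1,
}
fixed = strip_prefix(ckpt)
# fixed now uses the key names RobertaForSequenceClassification expects
```

In `inference.py` that would mean passing `strip_prefix(torch.load(model_path, map_location="cpu"))` to `load_state_dict` instead of the raw checkpoint dict, assuming the `.pt` file holds a plain state dict rather than some other wrapper object.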