allenai / scibert

A BERT model for scientific text.
https://arxiv.org/abs/1903.10676
Apache License 2.0

How to load the scibert model from local disk correctly? #119

Open omar-araboghli opened 2 years ago

omar-araboghli commented 2 years ago

Hi,

loading the model using

from transformers import AutoModel
model = AutoModel.from_pretrained('path/to/scibert_scivocab_uncased/directory')

doesn't work as expected. When I start fine-tuning the model, the following log shows up:

Some weights of the model checkpoint at ./scibert_scivocab_uncased were not used when initializing IBertModel: ['bert.embeddings.word_embeddings.weight', 'bert.encoder.layer.9.attention.self.query.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.8.intermediate.dense.weight', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.9.attention.self.query.bias', 'bert.encoder.layer.8.attention.self.key.weight', 'bert.encoder.layer.5.attention.self.query.bias', 'bert.encoder.layer.6.output.dense.weight', 'bert.encoder.layer.10.output.dense.bias', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.3.attention.self.value.bias', 'bert.encoder.layer.6.attention.self.key.bias', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.self.query.weight', 'bert.encoder.layer.8.attention.output.dense.weight', 'bert.encoder.layer.11.intermediate.dense.weight', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.dense.bias', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.3.attention.self.query.bias', 'cls.predictions.transform.LayerNorm.weight', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.output.dense.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.self.key.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.output.dense.bias', 'cls.predictions.decoder.weight', 'bert.encoder.layer.8.attention.self.value.bias', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.9.output.dense.bias', 'bert.encoder.layer.2.intermediate.dense.bias', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.3.attention.self.value.weight', 'cls.predictions.transform.dense.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.7.attention.self.value.bias', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.self.key.bias', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.self.key.weight', 'bert.encoder.layer.1.attention.self.query.bias', 'bert.pooler.dense.weight', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.layer.1.attention.self.value.bias', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 
'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.7.output.dense.bias', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.11.output.dense.bias', 'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.self.key.bias', 'cls.predictions.transform.dense.bias', 'bert.encoder.layer.6.attention.self.query.bias', 'bert.encoder.layer.10.attention.self.value.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.10.attention.self.key.bias', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.encoder.layer.7.attention.output.dense.bias', 'bert.encoder.layer.10.attention.self.query.bias', 'bert.encoder.layer.9.attention.self.value.bias', 'bert.embeddings.token_type_embeddings.weight', 'bert.encoder.layer.11.attention.self.query.bias', 'bert.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.8.attention.self.query.bias', 'bert.encoder.layer.4.attention.self.key.bias', 'bert.encoder.layer.9.attention.output.dense.bias', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.self.key.bias', 'bert.encoder.layer.5.output.dense.bias', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.11.output.LayerNorm.bias', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.layer.11.attention.self.value.bias', 'cls.predictions.bias', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.output.dense.bias', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.1.output.dense.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.8.output.dense.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.self.value.weight', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.4.output.dense.weight', 'bert.encoder.layer.10.output.dense.weight', 'bert.pooler.dense.bias', 'bert.encoder.layer.2.attention.self.value.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.0.output.LayerNorm.weight', 
'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.self.query.weight', 'cls.seq_relationship.weight', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.9.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.key.bias', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.5.attention.self.value.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.11.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.self.key.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'bert.encoder.layer.8.intermediate.dense.bias', 'bert.encoder.layer.11.intermediate.dense.bias', 'bert.encoder.layer.8.attention.output.dense.bias', 'bert.embeddings.LayerNorm.weight', 'bert.embeddings.position_embeddings.weight', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.10.attention.self.key.weight', 'bert.encoder.layer.4.attention.self.value.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.4.output.dense.bias', 'bert.encoder.layer.7.attention.self.query.bias', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.11.attention.self.key.weight', 'cls.seq_relationship.bias', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.5.attention.self.value.weight', 'cls.predictions.transform.LayerNorm.bias', 'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.output.dense.bias', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.4.attention.self.value.bias', 'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.6.attention.self.value.bias', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.self.key.bias', 'bert.encoder.layer.4.attention.self.query.weight']
- This IS expected if you are initializing IBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing IBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of IBertModel were not initialized from the model checkpoint at ./scibert_scivocab_uncased and are newly initialized: ['encoder.layer.10.intermediate.dense.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.5.intermediate.dense.fc_scaling_factor', 'encoder.layer.10.attention.self.key_activation.x_min', 'encoder.layer.7.output.ln_input_act.act_scaling_factor', 'encoder.layer.7.output.ln_input_act.x_max', 'encoder.layer.4.output.output_activation.x_min', 'encoder.layer.4.attention.output.dense.bias_integer', 'encoder.layer.11.intermediate.dense.weight_integer', 'encoder.layer.6.attention.self.key.weight_integer', 'encoder.layer.6.attention.output.ln_input_act.x_max', 'encoder.layer.9.attention.self.query_activation.x_max', 'encoder.layer.5.attention.output.LayerNorm.shift', 'encoder.layer.9.output.LayerNorm.activation.x_max', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.6.attention.output.dense.weight_integer', 'embeddings.word_embeddings.weight_scaling_factor', 'encoder.layer.0.attention.self.softmax.act.x_max', 'encoder.layer.6.intermediate.output_activation.act_scaling_factor', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.10.intermediate.dense.fc_scaling_factor', 'encoder.layer.1.pre_output_act.x_min', 'encoder.layer.4.attention.self.output_activation.x_max', 'encoder.layer.0.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.shift', 'encoder.layer.6.attention.self.value_activation.act_scaling_factor', 'encoder.layer.7.output.dense.bias_integer', 'encoder.layer.11.intermediate.dense.bias_integer', 'encoder.layer.4.attention.self.value_activation.act_scaling_factor', 'encoder.layer.1.attention.self.query.weight_integer', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.11.intermediate.output_activation.x_max', 'encoder.layer.11.output.dense.bias_integer', 'encoder.layer.3.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.7.attention.self.query_activation.x_min', 'encoder.layer.2.attention.self.query_activation.x_max', 'encoder.layer.4.output.dense.fc_scaling_factor', 'encoder.layer.0.attention.self.output_activation.x_min', 'encoder.layer.6.attention.self.query.fc_scaling_factor', 'encoder.layer.3.pre_output_act.x_max', 'encoder.layer.10.output.output_activation.act_scaling_factor', 'encoder.layer.0.attention.self.key_activation.act_scaling_factor', 'encoder.layer.4.output.LayerNorm.shift', 'encoder.layer.4.attention.self.softmax.act.x_min', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.ln_input_act.x_max', 'encoder.layer.4.attention.self.key.bias_integer', 'encoder.layer.0.output.output_activation.x_min', 'encoder.layer.9.pre_output_act.act_scaling_factor', 'encoder.layer.11.attention.self.output_activation.act_scaling_factor', 'encoder.layer.5.attention.output.ln_input_act.x_min', 'encoder.layer.1.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.self.value.weight_integer', 'encoder.layer.6.output.ln_input_act.act_scaling_factor', 'encoder.layer.7.attention.self.value_activation.act_scaling_factor', 'encoder.layer.11.output.ln_input_act.act_scaling_factor', 'encoder.layer.9.attention.self.output_activation.x_min', 'encoder.layer.9.attention.output.dense.fc_scaling_factor', 'encoder.layer.3.intermediate.output_activation.x_max', 'encoder.layer.2.pre_output_act.x_min', 
'encoder.layer.0.pre_intermediate_act.act_scaling_factor', 'encoder.layer.10.attention.self.output_activation.x_min', 'encoder.layer.5.attention.self.key_activation.act_scaling_factor', 'encoder.layer.10.attention.output.output_activation.x_max', 'encoder.layer.11.attention.self.query_activation.act_scaling_factor', 'encoder.layer.4.output.ln_input_act.x_max', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.11.intermediate.output_activation.act_scaling_factor', 'encoder.layer.3.intermediate.output_activation.x_min', 'encoder.layer.11.attention.self.key.bias_integer', 'encoder.layer.5.attention.self.query_activation.x_max', 'encoder.layer.1.attention.self.key.bias_integer', 'encoder.layer.11.attention.self.query_activation.x_min', 'encoder.layer.1.attention.output.dense.fc_scaling_factor', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.2.attention.self.key.weight_integer', 'encoder.layer.5.attention.self.value.bias_integer', 'encoder.layer.11.attention.output.ln_input_act.act_scaling_factor', 'embeddings.LayerNorm.bias', 'encoder.layer.2.pre_output_act.act_scaling_factor', 'encoder.layer.6.attention.self.key_activation.x_max', 'encoder.layer.2.intermediate.dense.weight_integer', 'encoder.layer.6.intermediate.dense.bias_integer', 'encoder.layer.7.pre_intermediate_act.x_max', 'encoder.layer.11.output.LayerNorm.activation.x_max', 'encoder.layer.7.attention.self.query_activation.x_max', 'encoder.layer.8.attention.self.value.weight_integer', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.1.attention.self.query_activation.x_min', 'encoder.layer.3.output.output_activation.x_min', 'encoder.layer.7.pre_output_act.x_min', 'encoder.layer.11.output.LayerNorm.bias', 'encoder.layer.7.output.dense.fc_scaling_factor', 'encoder.layer.7.attention.self.output_activation.x_max', 'encoder.layer.7.attention.self.softmax.act.act_scaling_factor', 'embeddings.position_embeddings.weight_integer', 'encoder.layer.2.pre_intermediate_act.x_min', 'encoder.layer.0.output.LayerNorm.activation.x_min', 'encoder.layer.11.pre_intermediate_act.x_min', 'encoder.layer.7.attention.self.query.bias_integer', 'encoder.layer.6.intermediate.dense.weight_integer', 'encoder.layer.3.output.LayerNorm.activation.x_max', 'encoder.layer.5.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.shift', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.8.output.dense.bias', 'embeddings.output_activation.act_scaling_factor', 'encoder.layer.4.attention.self.softmax.act.x_max', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.4.output.dense.bias_integer', 'encoder.layer.1.output.dense.fc_scaling_factor', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.0.output.LayerNorm.shift', 'encoder.layer.1.pre_output_act.act_scaling_factor', 'encoder.layer.1.pre_intermediate_act.act_scaling_factor', 'encoder.layer.5.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.7.attention.self.output_activation.act_scaling_factor', 'embeddings.token_type_embeddings.weight_integer', 'encoder.layer.10.attention.output.LayerNorm.activation.x_max', 'encoder.layer.5.intermediate.output_activation.x_min', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.3.attention.self.key.bias_integer', 'encoder.layer.9.output.LayerNorm.shift', 'encoder.layer.2.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.4.output.dense.bias', 'encoder.layer.8.attention.output.dense.fc_scaling_factor', 
'encoder.layer.8.output.dense.bias_integer', 'encoder.layer.10.intermediate.output_activation.x_max', 'encoder.layer.0.attention.self.value_activation.x_min', 'encoder.layer.6.output.dense.weight_integer', 'encoder.layer.1.output.LayerNorm.activation.x_min', 'encoder.layer.2.attention.self.query_activation.act_scaling_factor', 'encoder.layer.11.attention.output.output_activation.x_min', 'encoder.layer.1.attention.self.value_activation.x_min', 'encoder.layer.4.intermediate.dense.fc_scaling_factor', 'encoder.layer.3.attention.self.softmax.act.x_min', 'encoder.layer.8.attention.self.key.weight_integer', 'encoder.layer.11.attention.output.LayerNorm.shift', 'encoder.layer.7.attention.self.value.fc_scaling_factor', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.10.attention.self.query_activation.x_min', 'encoder.layer.10.attention.self.query.weight_integer', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.5.attention.output.LayerNorm.activation.x_max', 'encoder.layer.11.attention.self.key.bias', 'pooler.dense.bias', 'encoder.layer.0.attention.output.ln_input_act.x_max', 'encoder.layer.9.attention.self.value_activation.act_scaling_factor', 'encoder.layer.10.intermediate.dense.bias_integer', 'encoder.layer.9.intermediate.dense.weight_integer', 'encoder.layer.2.attention.self.softmax.act.x_max', 'encoder.layer.3.attention.self.key_activation.x_min', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.11.pre_output_act.x_min', 'encoder.layer.7.attention.self.output_activation.x_min', 'encoder.layer.9.output.ln_input_act.act_scaling_factor', 'encoder.layer.6.attention.self.query_activation.x_min', 'encoder.layer.3.attention.self.value_activation.x_min', 'encoder.layer.1.attention.output.ln_input_act.x_max', 'encoder.layer.2.attention.self.query_activation.x_min', 'encoder.layer.7.attention.output.ln_input_act.x_max', 'encoder.layer.9.pre_intermediate_act.act_scaling_factor', 'encoder.layer.4.output.dense.weight', 'encoder.layer.5.attention.output.LayerNorm.activation.x_min', 'encoder.layer.8.attention.output.dense.bias_integer', 'encoder.layer.7.attention.self.softmax.act.x_min', 'encoder.layer.10.attention.self.query.bias_integer', 'encoder.layer.6.output.ln_input_act.x_max', 'encoder.layer.1.attention.output.output_activation.act_scaling_factor', 'encoder.layer.8.intermediate.dense.weight_integer', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.pre_intermediate_act.act_scaling_factor', 'encoder.layer.9.attention.output.output_activation.x_min', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.self.value.weight_integer', 'pooler.dense.weight', 'encoder.layer.7.attention.self.query.weight_integer', 'encoder.layer.0.attention.self.query.weight_integer', 'encoder.layer.3.attention.self.output_activation.act_scaling_factor', 'embeddings.embeddings_act2.x_max', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.4.attention.self.value_activation.x_min', 'encoder.layer.0.intermediate.output_activation.x_min', 'encoder.layer.8.attention.self.query_activation.act_scaling_factor', 'encoder.layer.3.intermediate.dense.weight_integer', 'encoder.layer.5.attention.output.output_activation.act_scaling_factor', 'encoder.layer.7.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.5.output.ln_input_act.act_scaling_factor', 'encoder.layer.1.attention.self.query.bias', 
'encoder.layer.7.attention.self.value.bias_integer', 'encoder.layer.5.pre_output_act.x_max', 'encoder.layer.8.pre_intermediate_act.act_scaling_factor', 'encoder.layer.10.pre_intermediate_act.x_min', 'encoder.layer.4.intermediate.dense.bias_integer', 'embeddings.token_type_embeddings.weight_scaling_factor', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.activation.x_max', 'encoder.layer.9.pre_intermediate_act.x_max', 'encoder.layer.1.attention.output.output_activation.x_min', 'encoder.layer.9.output.ln_input_act.x_max', 'encoder.layer.10.output.LayerNorm.shift', 'encoder.layer.8.attention.self.key_activation.x_max', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.8.pre_intermediate_act.x_max', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.9.attention.self.query.bias_integer', 'encoder.layer.10.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.8.attention.output.output_activation.x_max', 'encoder.layer.4.intermediate.dense.weight_integer', 'encoder.layer.11.attention.self.softmax.act.x_max', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.2.output.dense.fc_scaling_factor', 'encoder.layer.1.intermediate.dense.weight_integer', 'encoder.layer.8.output.LayerNorm.activation.x_min', 'encoder.layer.2.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.7.attention.self.key_activation.x_max', 'encoder.layer.1.intermediate.output_activation.act_scaling_factor', 'encoder.layer.2.attention.self.output_activation.x_max', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.5.output.dense.bias_integer', 'encoder.layer.0.attention.output.LayerNorm.shift', 'encoder.layer.11.attention.self.value_activation.act_scaling_factor', 'encoder.layer.4.output.output_activation.x_max', 'encoder.layer.0.attention.self.key_activation.x_max', 'encoder.layer.5.attention.output.dense.fc_scaling_factor', 'encoder.layer.5.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.7.pre_output_act.x_max', 'encoder.layer.2.attention.self.output_activation.x_min', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.1.output.ln_input_act.x_min', 'encoder.layer.5.attention.self.output_activation.x_max', 'encoder.layer.6.attention.self.softmax.act.x_min', 'encoder.layer.11.attention.self.key.fc_scaling_factor', 'encoder.layer.10.output.dense.fc_scaling_factor', 'encoder.layer.7.attention.self.key_activation.x_min', 'encoder.layer.10.attention.self.key_activation.act_scaling_factor', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.10.attention.self.query.fc_scaling_factor', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.8.attention.output.output_activation.x_min', 'encoder.layer.7.attention.output.dense.fc_scaling_factor', 'encoder.layer.7.attention.self.query_activation.act_scaling_factor', 'encoder.layer.6.attention.self.query.bias_integer', 'encoder.layer.3.attention.self.query.weight_integer', 'encoder.layer.2.attention.self.query.fc_scaling_factor', 'encoder.layer.8.attention.output.LayerNorm.activation.x_min', 'encoder.layer.0.attention.self.output_activation.act_scaling_factor', 'encoder.layer.9.intermediate.output_activation.x_min', 'encoder.layer.5.pre_output_act.x_min', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.6.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.1.attention.self.query.fc_scaling_factor', 'encoder.layer.0.attention.output.dense.weight', 
'encoder.layer.0.attention.self.query_activation.act_scaling_factor', 'encoder.layer.2.output.LayerNorm.activation.x_min', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.self.query_activation.act_scaling_factor', 'encoder.layer.4.pre_intermediate_act.x_min', 'encoder.layer.8.output.ln_input_act.x_max', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.activation.x_min', 'embeddings.LayerNorm.shift', 'encoder.layer.0.output.dense.fc_scaling_factor', 'encoder.layer.3.output.output_activation.act_scaling_factor', 'encoder.layer.10.attention.self.query_activation.x_max', 'encoder.layer.11.attention.self.key_activation.x_min', 'encoder.layer.5.attention.self.query.bias_integer', 'encoder.layer.11.attention.output.dense.weight_integer', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.10.attention.self.softmax.act.x_min', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.1.intermediate.dense.bias_integer', 'encoder.layer.2.output.dense.bias_integer', 'encoder.layer.8.attention.output.LayerNorm.shift', 'encoder.layer.2.output.LayerNorm.shift', 'encoder.layer.8.pre_output_act.act_scaling_factor', 'encoder.layer.9.attention.output.LayerNorm.activation.x_min', 'encoder.layer.0.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.8.pre_intermediate_act.x_min', 'encoder.layer.10.attention.self.output_activation.x_max', 'encoder.layer.11.pre_intermediate_act.act_scaling_factor', 'encoder.layer.6.pre_output_act.x_min', 'encoder.layer.4.output.ln_input_act.act_scaling_factor', 'encoder.layer.10.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.0.output.dense.bias_integer', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.4.attention.self.key_activation.act_scaling_factor', 'encoder.layer.9.attention.self.query.weight_integer', 'encoder.layer.2.attention.self.output_activation.act_scaling_factor', 'encoder.layer.10.output.dense.bias_integer', 'encoder.layer.8.attention.self.query.bias_integer', 'encoder.layer.3.attention.self.value.bias_integer', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.1.output.LayerNorm.shift', 'encoder.layer.1.intermediate.output_activation.x_max', 'encoder.layer.2.attention.self.key_activation.x_min', 'encoder.layer.10.intermediate.output_activation.x_min', 'encoder.layer.0.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.2.attention.output.LayerNorm.activation.x_max', 'encoder.layer.4.intermediate.output_activation.act_scaling_factor', 'encoder.layer.9.attention.self.value.weight_integer', 'encoder.layer.11.attention.self.value_activation.x_max', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.3.output.dense.fc_scaling_factor', 'encoder.layer.7.output.output_activation.x_max', 'encoder.layer.4.output.ln_input_act.x_min', 'encoder.layer.3.attention.self.value_activation.x_max', 'encoder.layer.1.attention.output.LayerNorm.activation.x_min', 'encoder.layer.0.intermediate.dense.bias_integer', 'embeddings.token_type_embeddings.weight', 'encoder.layer.8.attention.self.softmax.act.x_min', 'encoder.layer.9.attention.output.LayerNorm.shift', 'encoder.layer.9.pre_output_act.x_max', 'encoder.layer.2.attention.self.key.fc_scaling_factor', 'encoder.layer.10.attention.output.dense.bias_integer', 'encoder.layer.2.output.output_activation.act_scaling_factor', 'encoder.layer.1.attention.self.value.bias', 
'encoder.layer.5.output.output_activation.x_max', 'encoder.layer.6.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.10.attention.self.output_activation.act_scaling_factor', 'encoder.layer.2.output.output_activation.x_min', 'encoder.layer.0.output.output_activation.act_scaling_factor', 'encoder.layer.1.output.ln_input_act.x_max', 'encoder.layer.1.attention.self.value.bias_integer', 'encoder.layer.11.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.2.output.dense.weight_integer', 'encoder.layer.4.attention.self.key_activation.x_min', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.3.attention.self.key.fc_scaling_factor', 'encoder.layer.10.attention.output.dense.weight_integer', 'encoder.layer.4.attention.self.query.weight_integer', 'encoder.layer.8.output.ln_input_act.x_min', 'encoder.layer.0.pre_output_act.x_max', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.pre_intermediate_act.x_max', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.11.attention.self.value.bias_integer', 'encoder.layer.2.attention.self.value.fc_scaling_factor', 'encoder.layer.11.output.LayerNorm.shift', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.0.attention.self.query_activation.x_min', 'encoder.layer.0.output.dense.weight', 'encoder.layer.9.attention.output.output_activation.act_scaling_factor', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.8.intermediate.output_activation.x_max', 'embeddings.output_activation.x_min', 'encoder.layer.4.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.6.attention.output.ln_input_act.x_min', 'encoder.layer.9.attention.self.value_activation.x_min', 'encoder.layer.9.intermediate.output_activation.x_max', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.5.intermediate.dense.bias_integer', 'encoder.layer.2.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.10.output.ln_input_act.act_scaling_factor', 'encoder.layer.1.attention.self.key_activation.act_scaling_factor', 'encoder.layer.0.output.dense.weight_integer', 'encoder.layer.7.output.LayerNorm.shift', 'encoder.layer.10.intermediate.output_activation.act_scaling_factor', 'encoder.layer.3.output.dense.weight', 'encoder.layer.6.attention.output.LayerNorm.activation.x_max', 'encoder.layer.7.output.dense.weight_integer', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.output.ln_input_act.x_min', 'encoder.layer.6.attention.output.output_activation.x_min', 'encoder.layer.7.output.output_activation.act_scaling_factor', 'encoder.layer.9.attention.self.value.fc_scaling_factor', 'encoder.layer.3.output.dense.bias_integer', 'encoder.layer.6.pre_output_act.act_scaling_factor', 'encoder.layer.0.attention.self.softmax.act.x_min', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.11.attention.output.dense.fc_scaling_factor', 'encoder.layer.0.attention.output.ln_input_act.x_min', 'encoder.layer.2.attention.output.LayerNorm.shift', 'embeddings.LayerNorm.weight', 'encoder.layer.3.attention.self.query.fc_scaling_factor', 'encoder.layer.6.attention.self.key_activation.x_min', 
'encoder.layer.7.output.LayerNorm.activation.x_max', 'encoder.layer.7.intermediate.dense.fc_scaling_factor', 'encoder.layer.3.attention.self.softmax.act.x_max', 'encoder.layer.8.pre_output_act.x_max', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.5.pre_intermediate_act.x_min', 'encoder.layer.2.output.ln_input_act.x_max', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.11.output.output_activation.x_max', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.9.output.output_activation.x_max', 'encoder.layer.4.attention.output.dense.fc_scaling_factor', 'encoder.layer.7.attention.output.output_activation.x_max', 'encoder.layer.1.attention.self.query_activation.act_scaling_factor', 'encoder.layer.2.intermediate.dense.fc_scaling_factor', 'encoder.layer.0.attention.self.value.weight_integer', 'encoder.layer.7.attention.self.key.bias_integer', 'encoder.layer.3.attention.self.value.fc_scaling_factor', 'encoder.layer.4.attention.self.value.weight_integer', 'encoder.layer.1.attention.output.LayerNorm.activation.x_max', 'encoder.layer.3.attention.self.key.weight_integer', 'encoder.layer.6.attention.self.query_activation.act_scaling_factor', 'encoder.layer.9.intermediate.dense.fc_scaling_factor', 'encoder.layer.11.attention.output.LayerNorm.activation.x_max', 'encoder.layer.3.output.ln_input_act.x_max', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.1.attention.self.query.bias_integer', 'encoder.layer.1.intermediate.dense.fc_scaling_factor', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.9.output.dense.weight', 'encoder.layer.6.attention.self.softmax.act.x_max', 'encoder.layer.1.attention.self.value.fc_scaling_factor', 'encoder.layer.3.attention.output.ln_input_act.x_max', 'encoder.layer.3.output.LayerNorm.activation.x_min', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.8.attention.self.query_activation.x_max', 'encoder.layer.5.output.LayerNorm.shift', 'encoder.layer.2.output.dense.weight', 'encoder.layer.0.attention.output.LayerNorm.activation.x_max', 'encoder.layer.3.attention.output.dense.weight_integer', 'encoder.layer.7.attention.self.key_activation.act_scaling_factor', 'encoder.layer.8.attention.self.query.fc_scaling_factor', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias_integer', 'encoder.layer.9.pre_intermediate_act.x_min', 'encoder.layer.11.attention.self.query.weight_integer', 'encoder.layer.2.intermediate.dense.bias_integer', 'encoder.layer.4.attention.self.value_activation.x_max', 'encoder.layer.8.attention.output.dense.weight_integer', 'encoder.layer.11.output.dense.weight_integer', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.11.intermediate.output_activation.x_min', 'encoder.layer.3.attention.output.LayerNorm.activation.x_max', 'encoder.layer.7.attention.output.dense.bias_integer', 'encoder.layer.4.output.output_activation.act_scaling_factor', 'encoder.layer.2.intermediate.output_activation.act_scaling_factor', 'embeddings.position_embeddings.weight_scaling_factor', 'encoder.layer.5.attention.self.query_activation.x_min', 'encoder.layer.7.attention.self.softmax.act.x_max', 'encoder.layer.11.pre_intermediate_act.x_max', 'encoder.layer.5.attention.self.query.weight_integer', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.5.attention.self.softmax.act.x_max', 'encoder.layer.10.attention.output.output_activation.act_scaling_factor', 
'encoder.layer.11.pre_output_act.act_scaling_factor', 'encoder.layer.4.attention.output.output_activation.x_max', 'encoder.layer.0.attention.self.key.weight_integer', 'encoder.layer.11.output.output_activation.x_min', 'encoder.layer.4.pre_output_act.x_min', 'encoder.layer.10.attention.self.key.weight_integer', 'encoder.layer.7.intermediate.dense.weight_integer', 'encoder.layer.6.output.LayerNorm.activation.x_min', 'encoder.layer.5.attention.self.key.weight_integer', 'encoder.layer.6.output.output_activation.act_scaling_factor', 'encoder.layer.9.attention.output.LayerNorm.activation.x_max', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.2.attention.output.output_activation.x_min', 'encoder.layer.5.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.3.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.6.pre_intermediate_act.act_scaling_factor', 'encoder.layer.3.attention.self.key_activation.x_max', 'encoder.layer.6.attention.output.LayerNorm.activation.x_min', 'encoder.layer.6.attention.self.key.fc_scaling_factor', 'encoder.layer.3.attention.self.output_activation.x_max', 'encoder.layer.11.attention.output.output_activation.x_max', 'encoder.layer.11.attention.self.query.bias_integer', 'encoder.layer.6.attention.self.key_activation.act_scaling_factor', 'encoder.layer.9.attention.self.key_activation.x_max', 'encoder.layer.8.attention.output.ln_input_act.x_max', 'encoder.layer.8.attention.self.key_activation.act_scaling_factor', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.5.intermediate.output_activation.x_max', 'encoder.layer.5.output.LayerNorm.activation.x_max', 'encoder.layer.8.intermediate.dense.fc_scaling_factor', 'encoder.layer.3.pre_intermediate_act.act_scaling_factor', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.6.pre_intermediate_act.x_min', 'encoder.layer.1.attention.self.query_activation.x_max', 'encoder.layer.0.attention.output.output_activation.act_scaling_factor', 'encoder.layer.0.intermediate.dense.weight_integer', 'encoder.layer.4.pre_output_act.act_scaling_factor', 'encoder.layer.8.attention.self.output_activation.x_min', 'encoder.layer.1.output.ln_input_act.act_scaling_factor', 'encoder.layer.2.output.ln_input_act.act_scaling_factor', 'encoder.layer.8.intermediate.output_activation.x_min', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.11.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.5.attention.output.ln_input_act.x_max', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.self.key_activation.x_min', 'encoder.layer.6.attention.self.output_activation.act_scaling_factor', 'encoder.layer.4.intermediate.output_activation.x_max', 'encoder.layer.5.attention.self.value.fc_scaling_factor', 'encoder.layer.5.attention.self.key_activation.x_max', 'encoder.layer.10.attention.self.value_activation.x_max', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.6.attention.self.output_activation.x_min', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias_integer', 'encoder.layer.3.intermediate.output_activation.act_scaling_factor', 'encoder.layer.3.output.dense.bias', 'encoder.layer.4.attention.self.query.fc_scaling_factor', 'encoder.layer.11.attention.self.value_activation.x_min', 
'encoder.layer.2.attention.output.output_activation.x_max', 'encoder.layer.3.output.output_activation.x_max', 'encoder.layer.3.attention.self.query_activation.act_scaling_factor', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.5.attention.self.value_activation.x_max', 'encoder.layer.10.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.0.attention.self.query.fc_scaling_factor', 'encoder.layer.0.attention.self.query.bias_integer', 'encoder.layer.4.attention.output.LayerNorm.bias', 'embeddings.position_embeddings.weight', 'encoder.layer.5.attention.self.key.fc_scaling_factor', 'encoder.layer.11.attention.output.dense.bias_integer', 'encoder.layer.7.intermediate.dense.bias_integer', 'encoder.layer.6.attention.output.output_activation.x_max', 'encoder.layer.4.attention.self.key.weight_integer', 'encoder.layer.4.attention.self.query_activation.x_min', 'embeddings.embeddings_act2.act_scaling_factor', 'encoder.layer.8.output.LayerNorm.shift', 'encoder.layer.9.attention.self.query.fc_scaling_factor', 'encoder.layer.10.attention.output.ln_input_act.x_max', 'encoder.layer.9.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.7.attention.self.key.fc_scaling_factor', 'encoder.layer.0.pre_output_act.act_scaling_factor', 'encoder.layer.6.output.dense.bias_integer', 'encoder.layer.6.pre_intermediate_act.x_max', 'encoder.layer.4.pre_intermediate_act.act_scaling_factor', 'encoder.layer.5.output.ln_input_act.x_max', 'encoder.layer.7.attention.self.key.weight_integer', 'encoder.layer.0.attention.self.value.fc_scaling_factor', 'encoder.layer.10.attention.output.ln_input_act.x_min', 'encoder.layer.10.attention.output.LayerNorm.activation.x_min', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.5.attention.self.query.fc_scaling_factor', 'encoder.layer.8.attention.self.value_activation.x_min', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.2.attention.output.output_activation.act_scaling_factor', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.10.attention.self.value.bias_integer', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.10.pre_output_act.act_scaling_factor', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.fc_scaling_factor', 'encoder.layer.11.output.output_activation.act_scaling_factor', 'encoder.layer.7.output.output_activation.x_min', 'encoder.layer.0.intermediate.output_activation.act_scaling_factor', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.4.attention.self.key_activation.x_max', 'embeddings.word_embeddings.weight', 'encoder.layer.9.attention.output.output_activation.x_max', 'encoder.layer.5.output.output_activation.x_min', 'encoder.layer.2.pre_output_act.x_max', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.11.output.ln_input_act.x_min', 'encoder.layer.2.attention.self.key_activation.act_scaling_factor', 'encoder.layer.8.attention.self.key.weight', 'embeddings.LayerNorm.activation.x_max', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.5.intermediate.dense.weight_integer', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.9.attention.self.value_activation.x_max', 'encoder.layer.7.attention.output.LayerNorm.bias', 'embeddings.embeddings_act2.x_min', 'encoder.layer.5.attention.output.output_activation.x_max', 
'encoder.layer.6.pre_output_act.x_max', 'encoder.layer.7.attention.output.LayerNorm.shift', 'encoder.layer.9.output.output_activation.act_scaling_factor', 'encoder.layer.11.attention.self.key.weight_integer', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.8.intermediate.output_activation.act_scaling_factor', 'encoder.layer.11.attention.self.query_activation.x_max', 'encoder.layer.3.attention.output.output_activation.x_max', 'encoder.layer.6.attention.output.dense.fc_scaling_factor', 'encoder.layer.1.output.output_activation.x_max', 'embeddings.embeddings_act1.act_scaling_factor', 'encoder.layer.10.attention.self.value.weight_integer', 'encoder.layer.7.attention.output.output_activation.x_min', 'encoder.layer.7.pre_output_act.act_scaling_factor', 'encoder.layer.7.pre_intermediate_act.x_min', 'encoder.layer.9.output.LayerNorm.activation.x_min', 'embeddings.LayerNorm.activation.x_min', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.11.output.ln_input_act.x_max', 'encoder.layer.10.attention.self.softmax.act.x_max', 'encoder.layer.10.attention.output.output_activation.x_min', 'encoder.layer.3.pre_output_act.act_scaling_factor', 'encoder.layer.2.attention.self.value_activation.x_max', 'encoder.layer.2.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.6.attention.self.value_activation.x_min', 'encoder.layer.6.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.10.attention.self.key.bias_integer', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.0.output.LayerNorm.activation.x_max', 'encoder.layer.2.output.output_activation.x_max', 'encoder.layer.8.intermediate.dense.bias_integer', 'encoder.layer.10.output.LayerNorm.activation.x_min', 'encoder.layer.2.attention.self.query.weight_integer', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.2.attention.self.value_activation.act_scaling_factor', 'encoder.layer.8.attention.output.ln_input_act.x_min', 'encoder.layer.8.output.output_activation.x_min', 'encoder.layer.11.attention.output.LayerNorm.activation.x_min', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.attention.self.output_activation.x_max', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.7.pre_intermediate_act.act_scaling_factor', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.6.attention.self.value_activation.x_max', 'encoder.layer.2.pre_intermediate_act.act_scaling_factor', 'encoder.layer.3.intermediate.dense.fc_scaling_factor', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.1.attention.self.key.weight_integer', 'encoder.layer.0.attention.self.value_activation.x_max', 'encoder.layer.9.attention.self.value.bias_integer', 'encoder.layer.7.attention.self.value_activation.x_min', 'encoder.layer.7.attention.output.dense.weight_integer', 'encoder.layer.8.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.9.attention.self.key.bias_integer', 'encoder.layer.11.attention.self.output_activation.x_min', 'encoder.layer.4.output.LayerNorm.activation.x_max', 'encoder.layer.11.attention.self.value.fc_scaling_factor', 'encoder.layer.1.pre_output_act.x_max', 'encoder.layer.3.attention.self.query_activation.x_max', 'encoder.layer.6.intermediate.dense.fc_scaling_factor', 'encoder.layer.1.output.output_activation.x_min', 'encoder.layer.5.pre_intermediate_act.x_max', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.7.attention.self.value.bias', 
'encoder.layer.4.attention.self.key.fc_scaling_factor', 'encoder.layer.5.attention.self.value.weight_integer', 'encoder.layer.9.output.dense.weight_integer', 'encoder.layer.8.attention.output.output_activation.act_scaling_factor', 'encoder.layer.6.intermediate.output_activation.x_min', 'encoder.layer.7.output.dense.weight', 'encoder.layer.8.attention.self.value_activation.act_scaling_factor', 'encoder.layer.1.output.dense.bias', 'encoder.layer.3.output.dense.weight_integer', 'encoder.layer.5.output.output_activation.act_scaling_factor', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.10.output.LayerNorm.activation.x_max', 'encoder.layer.11.attention.output.output_activation.act_scaling_factor', 'encoder.layer.5.output.LayerNorm.activation.x_min', 'encoder.layer.3.intermediate.dense.bias_integer', 'encoder.layer.9.output.dense.bias_integer', 'encoder.layer.6.intermediate.output_activation.x_max', 'encoder.layer.4.attention.self.query.bias_integer', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.self.key.fc_scaling_factor', 'encoder.layer.0.attention.self.key.bias_integer', 'encoder.layer.1.pre_intermediate_act.x_min', 'encoder.layer.9.attention.self.output_activation.x_max', 'encoder.layer.11.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.7.attention.output.ln_input_act.x_min', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.11.pre_output_act.x_max', 'encoder.layer.1.attention.self.softmax.act.x_max', 'encoder.layer.2.attention.self.value.bias_integer', 'encoder.layer.3.output.ln_input_act.x_min', 'encoder.layer.4.attention.self.output_activation.x_min', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.0.attention.output.LayerNorm.activation.x_min', 'encoder.layer.0.output.output_activation.x_max', 'encoder.layer.9.attention.output.dense.weight_integer', 'encoder.layer.5.attention.self.value_activation.act_scaling_factor', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.0.attention.self.key.fc_scaling_factor', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.9.output.output_activation.x_min', 'encoder.layer.8.attention.self.key.bias_integer', 'encoder.layer.0.attention.self.value.bias_integer', 'embeddings.word_embeddings.weight_integer', 'encoder.layer.9.attention.output.dense.bias_integer', 'encoder.layer.5.output.dense.weight_integer', 'encoder.layer.9.attention.self.key_activation.x_min', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.7.output.LayerNorm.activation.x_min', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.output_activation.act_scaling_factor', 'encoder.layer.9.attention.self.softmax.act.x_max', 'encoder.layer.4.attention.output.output_activation.x_min', 'encoder.layer.3.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.9.pre_output_act.x_min', 'encoder.layer.10.pre_output_act.x_min', 'encoder.layer.4.attention.output.dense.weight_integer', 'encoder.layer.8.attention.output.LayerNorm.activation.x_max', 'encoder.layer.10.pre_output_act.x_max', 'encoder.layer.6.attention.output.LayerNorm.shift', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.8.attention.self.value.fc_scaling_factor', 'encoder.layer.0.pre_intermediate_act.x_min', 'encoder.layer.3.attention.output.dense.fc_scaling_factor', 
'encoder.layer.10.attention.self.value.weight', 'encoder.layer.8.attention.self.query.weight_integer', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.7.intermediate.output_activation.act_scaling_factor', 'encoder.layer.4.attention.self.value.bias_integer', 'encoder.layer.6.attention.self.query_activation.x_max', 'encoder.layer.10.output.ln_input_act.x_min', 'encoder.layer.8.output.output_activation.act_scaling_factor', 'encoder.layer.1.attention.output.LayerNorm.shift', 'encoder.layer.10.output.output_activation.x_min', 'encoder.layer.7.attention.output.LayerNorm.activation.x_min', 'encoder.layer.7.attention.output.output_activation.act_scaling_factor', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.10.pre_intermediate_act.act_scaling_factor', 'encoder.layer.8.output.output_activation.x_max', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.output_activation.x_min', 'encoder.layer.11.attention.self.output_activation.x_max', 'encoder.layer.0.attention.self.output_activation.x_max', 'encoder.layer.0.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.8.attention.self.key.fc_scaling_factor', 'encoder.layer.2.attention.self.key_activation.x_max', 'encoder.layer.7.intermediate.output_activation.x_max', 'encoder.layer.10.output.ln_input_act.x_max', 'embeddings.LayerNorm.activation.act_scaling_factor', 'encoder.layer.3.attention.output.output_activation.act_scaling_factor', 'encoder.layer.10.intermediate.dense.weight_integer', 'encoder.layer.11.intermediate.dense.fc_scaling_factor', 'encoder.layer.3.attention.self.value.weight_integer', 'encoder.layer.9.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.8.attention.self.value.bias_integer', 'encoder.layer.0.attention.self.key_activation.x_min', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.6.output.output_activation.x_max', 'encoder.layer.7.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.0.attention.output.output_activation.x_max', 'encoder.layer.3.attention.output.ln_input_act.x_min', 'encoder.layer.4.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.10.output.dense.weight_integer', 'encoder.layer.4.attention.output.LayerNorm.activation.x_max', 'encoder.layer.7.attention.output.LayerNorm.activation.x_max', 'encoder.layer.5.attention.self.softmax.act.x_min', 'encoder.layer.3.attention.self.query_activation.x_min', 'encoder.layer.11.attention.output.ln_input_act.x_min', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.2.attention.output.ln_input_act.x_min', 'encoder.layer.6.attention.self.value.weight_integer', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.0.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.8.output.LayerNorm.activation.x_max', 'encoder.layer.4.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.9.output.dense.fc_scaling_factor', 'encoder.layer.4.attention.output.output_activation.act_scaling_factor', 'encoder.layer.8.output.ln_input_act.act_scaling_factor', 'encoder.layer.9.intermediate.dense.bias_integer', 'encoder.layer.6.attention.self.query.weight_integer', 'encoder.layer.1.attention.self.value_activation.act_scaling_factor', 'encoder.layer.0.attention.output.output_activation.x_min', 'encoder.layer.4.attention.self.query_activation.act_scaling_factor', 
'encoder.layer.4.output.dense.weight_integer', 'encoder.layer.1.attention.output.output_activation.x_max', 'encoder.layer.11.output.LayerNorm.activation.x_min', 'encoder.layer.2.attention.output.dense.weight_integer', 'encoder.layer.1.output.output_activation.act_scaling_factor', 'encoder.layer.4.attention.output.LayerNorm.activation.x_min', 'encoder.layer.5.pre_output_act.act_scaling_factor', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.1.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.4.attention.self.value.fc_scaling_factor', 'encoder.layer.8.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.2.pre_intermediate_act.x_max', 'encoder.layer.9.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.11.attention.self.query.fc_scaling_factor', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.3.attention.output.output_activation.x_min', 'encoder.layer.2.attention.output.dense.fc_scaling_factor', 'encoder.layer.4.attention.self.output_activation.act_scaling_factor', 'encoder.layer.6.output.output_activation.x_min', 'encoder.layer.1.attention.self.key_activation.x_min', 'encoder.layer.3.attention.self.value_activation.act_scaling_factor', 'encoder.layer.3.output.ln_input_act.act_scaling_factor', 'encoder.layer.2.attention.self.value.weight_integer', 'encoder.layer.4.attention.self.query_activation.x_max', 'encoder.layer.2.attention.self.value_activation.x_min', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.10.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.1.attention.output.dense.weight_integer', 'encoder.layer.11.attention.self.softmax.act.x_min', 'encoder.layer.5.attention.output.dense.bias_integer', 'encoder.layer.1.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.0.attention.output.dense.fc_scaling_factor', 'encoder.layer.5.attention.output.output_activation.x_min', 'encoder.layer.5.intermediate.output_activation.act_scaling_factor', 'embeddings.output_activation.x_max', 'encoder.layer.6.output.LayerNorm.activation.x_max', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.output.dense.bias', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.10.attention.self.value.fc_scaling_factor', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.5.output.dense.weight', 'encoder.layer.4.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.6.output.LayerNorm.shift', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.9.intermediate.output_activation.act_scaling_factor', 'encoder.layer.7.intermediate.output_activation.x_min', 'encoder.layer.10.attention.output.LayerNorm.shift', 'encoder.layer.10.attention.self.query_activation.act_scaling_factor', 'embeddings.embeddings_act1.x_max', 'encoder.layer.8.attention.self.key_activation.x_min', 'encoder.layer.9.attention.self.output_activation.act_scaling_factor', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.2.attention.output.LayerNorm.activation.x_min', 'encoder.layer.0.attention.output.dense.bias_integer', 'encoder.layer.9.attention.output.ln_input_act.x_min', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.4.pre_output_act.x_max', 'encoder.layer.8.attention.self.softmax.act.x_max', 'encoder.layer.9.attention.output.ln_input_act.x_max', 'encoder.layer.5.attention.self.key.bias_integer', 'encoder.layer.7.attention.self.value.weight', 
'encoder.layer.2.attention.self.query.bias_integer', 'encoder.layer.5.attention.self.output_activation.x_min', 'encoder.layer.5.output.ln_input_act.x_min', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.9.attention.self.key.weight_integer', 'encoder.layer.10.output.output_activation.x_max', 'encoder.layer.0.intermediate.output_activation.x_max', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.3.attention.self.output_activation.x_min', 'encoder.layer.7.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.3.attention.output.dense.bias_integer', 'encoder.layer.8.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.9.attention.self.query_activation.x_min', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.output.ln_input_act.x_min', 'encoder.layer.6.output.dense.fc_scaling_factor', 'encoder.layer.7.attention.self.query.fc_scaling_factor', 'encoder.layer.3.attention.self.key_activation.act_scaling_factor', 'encoder.layer.9.attention.self.query_activation.act_scaling_factor', 'encoder.layer.0.attention.self.query_activation.x_max', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.6.attention.output.output_activation.act_scaling_factor', 'encoder.layer.3.pre_output_act.x_min', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.attention.self.key.fc_scaling_factor', 'encoder.layer.9.output.ln_input_act.x_min', 'encoder.layer.6.attention.self.value.fc_scaling_factor', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.6.attention.self.key.bias_integer', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.1.attention.self.output_activation.x_max', 'encoder.layer.3.attention.self.query.bias_integer', 'encoder.layer.0.output.ln_input_act.act_scaling_factor', 'encoder.layer.8.output.dense.weight_integer', 'encoder.layer.4.intermediate.output_activation.x_min', 'encoder.layer.11.attention.output.ln_input_act.x_max', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.4.attention.output.ln_input_act.x_max', 'encoder.layer.8.attention.self.output_activation.act_scaling_factor', 'encoder.layer.6.attention.output.dense.bias_integer', 'encoder.layer.0.output.ln_input_act.x_max', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.1.intermediate.output_activation.x_min', 'encoder.layer.8.attention.self.query_activation.x_min', 'encoder.layer.5.attention.output.dense.bias', 'embeddings.embeddings_act1.x_min', 'encoder.layer.2.intermediate.output_activation.x_min', 'encoder.layer.9.attention.self.softmax.act.x_min', 'encoder.layer.1.attention.self.key.fc_scaling_factor', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.2.attention.self.key.bias_integer', 'encoder.layer.1.attention.output.ln_input_act.x_min', 'encoder.layer.0.attention.self.value_activation.act_scaling_factor', 'encoder.layer.0.attention.output.dense.weight_integer', 'encoder.layer.7.attention.self.value.weight_integer', 'encoder.layer.1.attention.output.dense.bias_integer', 'encoder.layer.3.pre_intermediate_act.x_min', 'encoder.layer.10.attention.self.key_activation.x_max', 'encoder.layer.1.output.dense.bias_integer', 'encoder.layer.11.attention.self.key_activation.act_scaling_factor', 'encoder.layer.1.attention.self.value_activation.x_max', 
'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.output.LayerNorm.activation.x_min', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.9.attention.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.activation.x_max', 'encoder.layer.8.attention.self.value_activation.x_max', 'encoder.layer.10.pre_intermediate_act.x_max', 'encoder.layer.7.output.dense.bias', 'encoder.layer.10.attention.self.value_activation.act_scaling_factor', 'encoder.layer.8.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.8.output.dense.weight', 'encoder.layer.10.attention.self.value_activation.x_min', 'encoder.layer.2.attention.self.softmax.act.x_min', 'encoder.layer.11.output.dense.fc_scaling_factor', 'encoder.layer.1.attention.self.output_activation.act_scaling_factor', 'encoder.layer.1.attention.self.softmax.act.x_min', 'encoder.layer.5.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.9.attention.self.key_activation.act_scaling_factor', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.1.attention.self.key_activation.x_max', 'encoder.layer.1.attention.self.softmax.act.act_scaling_factor', 'encoder.layer.0.pre_intermediate_act.x_max', 'encoder.layer.7.attention.self.value_activation.x_max', 'encoder.layer.6.output.dense.weight', 'encoder.layer.4.pre_intermediate_act.x_max', 'encoder.layer.3.pre_intermediate_act.x_max', 'encoder.layer.0.pre_output_act.x_min', 'encoder.layer.0.output.ln_input_act.x_min', 'encoder.layer.3.attention.output.ln_input_act.act_scaling_factor', 'encoder.layer.7.output.ln_input_act.x_min', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.11.attention.self.key_activation.x_max', 'encoder.layer.5.output.dense.fc_scaling_factor', 'encoder.layer.11.output.LayerNorm.activation.act_scaling_factor', 'encoder.layer.5.attention.self.value_activation.x_min', 'encoder.layer.3.output.LayerNorm.shift', 'encoder.layer.0.intermediate.dense.fc_scaling_factor', 'encoder.layer.5.attention.output.dense.weight_integer', 'encoder.layer.8.output.dense.fc_scaling_factor', 'encoder.layer.8.attention.self.output_activation.x_max', 'encoder.layer.1.output.dense.weight_integer', 'encoder.layer.2.intermediate.output_activation.x_max', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.4.attention.output.ln_input_act.x_min', 'encoder.layer.8.pre_output_act.x_min']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

It seems there are differences between the state_dict structure of the saved checkpoint and the model being loaded: the checkpoint keys are prefixed with bert. (plus the cls.* pretraining heads), while the IBertModel that AutoModel instantiates expects unprefixed embeddings.* / encoder.* keys together with extra quantization parameters.
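
For completeness, here is a minimal sketch of how the resolved architecture and the key mismatch can be inspected (assuming the directory contains the usual config.json and pytorch_model.bin; those file names are an assumption on my side):

import torch
from transformers import AutoConfig, AutoModel

path = './scibert_scivocab_uncased'

# Which architecture does AutoModel dispatch to? If config.json reports
# model_type "ibert", AutoModel builds an IBertModel even though the
# checkpoint holds plain BERT weights.
config = AutoConfig.from_pretrained(path)
print(config.model_type, getattr(config, 'architectures', None))

# Keys stored in the checkpoint on disk (prefixed with "bert." / "cls.")
checkpoint_keys = set(torch.load(f'{path}/pytorch_model.bin', map_location='cpu').keys())

# Keys the freshly constructed model actually expects
model = AutoModel.from_pretrained(path)
expected_keys = set(model.state_dict().keys())

print('only in checkpoint:', sorted(checkpoint_keys - expected_keys)[:5])
print('only in model:', sorted(expected_keys - checkpoint_keys)[:5])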

Can somebody please help?
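
One workaround I am considering (not sure whether it is the intended way) is to bypass AutoModel's dispatch and instantiate the BERT classes explicitly, assuming the checkpoint really holds plain BERT weights; a rough sketch:

from transformers import BertModel, BertTokenizer

path = './scibert_scivocab_uncased'

# Load tokenizer and model with the plain BERT classes rather than letting
# AutoModel pick the architecture from config.json.
tokenizer = BertTokenizer.from_pretrained(path)
model = BertModel.from_pretrained(path)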