abhishekkrthakur / bert-sentiment

Loading DataParallel GPU model on CPU #4

Closed nipunsadvilkar closed 4 years ago

nipunsadvilkar commented 4 years ago

Follow-up to issue #1 @abhishekkrthakur: can you give any leads on how to load a DataParallel GPU model on CPU? As per the PyTorch docs I tried the following, but it still raises the above RuntimeError:

device = torch.device('cpu')
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=device))
abhishekkrthakur commented 4 years ago

Hi. When loading the state dict, you have to pass map_location='cpu' to torch.load.

abhishekkrthakur commented 4 years ago

And before that, you need to wrap the model in DataParallel, as mentioned in #1.

nipunsadvilkar commented 4 years ago

I did, but I'm still getting the error.

MODEL.load_state_dict(torch.load(config.MODEL_PATH, map_location={"cuda:0" : "cpu"}))
nipunsadvilkar commented 4 years ago

@abhishekkrthakur Here is the traceback for your reference after setting map_location to cpu:

Traceback (most recent call last):
  File "predict.py", line 61, in <module>
    positive_prediction = sentence_prediction(sentence)
  File "predict.py", line 54, in sentence_prediction
    token_type_ids=token_type_ids
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 146, in forward
    "them on device: {}".format(self.src_device_obj, t.device))
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu

with the model loaded as:

DEVICE = torch.device('cpu')
PREDICTION_DICT = dict()
memory = joblib.Memory("/content/bert-sentiment/input/", verbose=0)

MODEL = BERTBaseUncased()
MODEL = nn.DataParallel(MODEL)
MODEL.load_state_dict(torch.load(config.MODEL_PATH,  map_location=DEVICE))
MODEL.to(DEVICE)
MODEL.eval()
abhishekkrthakur commented 4 years ago

Could you please post the stack trace when you don't use DataParallel and set map_location='cpu'?

nipunsadvilkar commented 4 years ago

Sure. I updated the comment above with the model-loading lines for your reference.

Without DataParallel (commented out) and with map_location='cpu':

DEVICE = torch.device('cpu')
PREDICTION_DICT = dict()
memory = joblib.Memory("/content/bert-sentiment/input/", verbose=0)

MODEL = BERTBaseUncased()
# MODEL = nn.DataParallel(MODEL)
MODEL.load_state_dict(torch.load(config.MODEL_PATH,  map_location=DEVICE))
MODEL.to(DEVICE)
MODEL.eval()

Traceback:

2020-03-29 15:39:15.549276: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
  File "predict.py", line 17, in <module>
    MODEL.load_state_dict(torch.load(config.MODEL_PATH,  map_location=DEVICE))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 830, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BERTBaseUncased:
    Missing key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.intermediate.dense.weight", "bert.encoder.layer.1.intermediate.dense.bias", "bert.encoder.layer.1.output.dense.weight", "bert.encoder.layer.1.output.dense.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer.2.attention.self.query.weight", "bert.encoder.layer.2.attention.self.query.bias", "bert.encoder.layer.2.attention.self.key.weight", "bert.encoder.layer.2.attention.self.key.bias", "bert.encoder.layer.2.attention.self.value.weight", "bert.encoder.layer.2.attention.self.value.bias", "bert.encoder.layer.2.attention.output.dense.weight", "bert.encoder.layer.2.attention.output.dense.bias", "bert.encoder.layer.2.attention.output.LayerNorm.weight", "bert.encoder.layer.2.attention.output.LayerNorm.bias", "bert.encoder.layer.2.intermediate.dense.weight", "bert.encoder.layer.2.intermediate.dense.bias", "bert.encoder.layer.2.output.dense.weight", "bert.encoder.layer.2.output.dense.bias", "bert.encoder.layer.2.output.LayerNorm.weight", "bert.encoder.layer.2.output.LayerNorm.bias", "bert.encoder.layer.3.attention.self.query.weight", "bert.encoder.layer.3.attention.self.query.bias", "bert.encoder.layer.3.attention.self.key.weight", "bert.encoder.layer.3.attention.self.key.bias", "bert.encoder.layer.3.attention.self.value.weight", "bert.encoder.layer.3.attention.self.value.bias", "bert.encoder.layer.3.attention.output.dense.weight", "bert.encoder.layer.3.attention.output.dense.bias", "bert.encoder.layer.3.attention.output.LayerNorm.weight", "bert.encoder.layer.3.attention.output.LayerNorm.bias", "bert.encoder.layer.3.intermediate.dense.weight", "bert.encoder.layer.3.intermediate.dense.bias", "bert.encoder.layer.3.output.dense.weight", "bert.encoder.layer.3.output.dense.bias", "bert.encoder.layer.3.output.LayerNorm.weight", "bert.encoder.layer.3.output.LayerNorm.bias", "bert.encoder.layer.4.attention.self.query.weight", "bert.encoder.layer.4.attention.self.query.bias", 
"bert.encoder.layer.4.attention.self.key.weight", "bert.encoder.layer.4.attention.self.key.bias", "bert.encoder.layer.4.attention.self.value.weight", "bert.encoder.layer.4.attention.self.value.bias", "bert.encoder.layer.4.attention.output.dense.weight", "bert.encoder.layer.4.attention.output.dense.bias", "bert.encoder.layer.4.attention.output.LayerNorm.weight", "bert.encoder.layer.4.attention.output.LayerNorm.bias", "bert.encoder.layer.4.intermediate.dense.weight", "bert.encoder.layer.4.intermediate.dense.bias", "bert.encoder.layer.4.output.dense.weight", "bert.encoder.layer.4.output.dense.bias", "bert.encoder.layer.4.output.LayerNorm.weight", "bert.encoder.layer.4.output.LayerNorm.bias", "bert.encoder.layer.5.attention.self.query.weight", "bert.encoder.layer.5.attention.self.query.bias", "bert.encoder.layer.5.attention.self.key.weight", "bert.encoder.layer.5.attention.self.key.bias", "bert.encoder.layer.5.attention.self.value.weight", "bert.encoder.layer.5.attention.self.value.bias", "bert.encoder.layer.5.attention.output.dense.weight", "bert.encoder.layer.5.attention.output.dense.bias", "bert.encoder.layer.5.attention.output.LayerNorm.weight", "bert.encoder.layer.5.attention.output.LayerNorm.bias", "bert.encoder.layer.5.intermediate.dense.weight", "bert.encoder.layer.5.intermediate.dense.bias", "bert.encoder.layer.5.output.dense.weight", "bert.encoder.layer.5.output.dense.bias", "bert.encoder.layer.5.output.LayerNorm.weight", "bert.encoder.layer.5.output.LayerNorm.bias", "bert.encoder.layer.6.attention.self.query.weight", "bert.encoder.layer.6.attention.self.query.bias", "bert.encoder.layer.6.attention.self.key.weight", "bert.encoder.layer.6.attention.self.key.bias", "bert.encoder.layer.6.attention.self.value.weight", "bert.encoder.layer.6.attention.self.value.bias", "bert.encoder.layer.6.attention.output.dense.weight", "bert.encoder.layer.6.attention.output.dense.bias", "bert.encoder.layer.6.attention.output.LayerNorm.weight", "bert.encoder.layer.6.attention.output.LayerNorm.bias", "bert.encoder.layer.6.intermediate.dense.weight", "bert.encoder.layer.6.intermediate.dense.bias", "bert.encoder.layer.6.output.dense.weight", "bert.encoder.layer.6.output.dense.bias", "bert.encoder.layer.6.output.LayerNorm.weight", "bert.encoder.layer.6.output.LayerNorm.bias", "bert.encoder.layer.7.attention.self.query.weight", "bert.encoder.layer.7.attention.self.query.bias", "bert.encoder.layer.7.attention.self.key.weight", "bert.encoder.layer.7.attention.self.key.bias", "bert.encoder.layer.7.attention.self.value.weight", "bert.encoder.layer.7.attention.self.value.bias", "bert.encoder.layer.7.attention.output.dense.weight", "bert.encoder.layer.7.attention.output.dense.bias", "bert.encoder.layer.7.attention.output.LayerNorm.weight", "bert.encoder.layer.7.attention.output.LayerNorm.bias", "bert.encoder.layer.7.intermediate.dense.weight", "bert.encoder.layer.7.intermediate.dense.bias", "bert.encoder.layer.7.output.dense.weight", "bert.encoder.layer.7.output.dense.bias", "bert.encoder.layer.7.output.LayerNorm.weight", "bert.encoder.layer.7.output.LayerNorm.bias", "bert.encoder.layer.8.attention.self.query.weight", "bert.encoder.layer.8.attention.self.query.bias", "bert.encoder.layer.8.attention.self.key.weight", "bert.encoder.layer.8.attention.self.key.bias", "bert.encoder.layer.8.attention.self.value.weight", "bert.encoder.layer.8.attention.self.value.bias", "bert.encoder.layer.8.attention.output.dense.weight", "bert.encoder.layer.8.attention.output.dense.bias", 
"bert.encoder.layer.8.attention.output.LayerNorm.weight", "bert.encoder.layer.8.attention.output.LayerNorm.bias", "bert.encoder.layer.8.intermediate.dense.weight", "bert.encoder.layer.8.intermediate.dense.bias", "bert.encoder.layer.8.output.dense.weight", "bert.encoder.layer.8.output.dense.bias", "bert.encoder.layer.8.output.LayerNorm.weight", "bert.encoder.layer.8.output.LayerNorm.bias", "bert.encoder.layer.9.attention.self.query.weight", "bert.encoder.layer.9.attention.self.query.bias", "bert.encoder.layer.9.attention.self.key.weight", "bert.encoder.layer.9.attention.self.key.bias", "bert.encoder.layer.9.attention.self.value.weight", "bert.encoder.layer.9.attention.self.value.bias", "bert.encoder.layer.9.attention.output.dense.weight", "bert.encoder.layer.9.attention.output.dense.bias", "bert.encoder.layer.9.attention.output.LayerNorm.weight", "bert.encoder.layer.9.attention.output.LayerNorm.bias", "bert.encoder.layer.9.intermediate.dense.weight", "bert.encoder.layer.9.intermediate.dense.bias", "bert.encoder.layer.9.output.dense.weight", "bert.encoder.layer.9.output.dense.bias", "bert.encoder.layer.9.output.LayerNorm.weight", "bert.encoder.layer.9.output.LayerNorm.bias", "bert.encoder.layer.10.attention.self.query.weight", "bert.encoder.layer.10.attention.self.query.bias", "bert.encoder.layer.10.attention.self.key.weight", "bert.encoder.layer.10.attention.self.key.bias", "bert.encoder.layer.10.attention.self.value.weight", "bert.encoder.layer.10.attention.self.value.bias", "bert.encoder.layer.10.attention.output.dense.weight", "bert.encoder.layer.10.attention.output.dense.bias", "bert.encoder.layer.10.attention.output.LayerNorm.weight", "bert.encoder.layer.10.attention.output.LayerNorm.bias", "bert.encoder.layer.10.intermediate.dense.weight", "bert.encoder.layer.10.intermediate.dense.bias", "bert.encoder.layer.10.output.dense.weight", "bert.encoder.layer.10.output.dense.bias", "bert.encoder.layer.10.output.LayerNorm.weight", "bert.encoder.layer.10.output.LayerNorm.bias", "bert.encoder.layer.11.attention.self.query.weight", "bert.encoder.layer.11.attention.self.query.bias", "bert.encoder.layer.11.attention.self.key.weight", "bert.encoder.layer.11.attention.self.key.bias", "bert.encoder.layer.11.attention.self.value.weight", "bert.encoder.layer.11.attention.self.value.bias", "bert.encoder.layer.11.attention.output.dense.weight", "bert.encoder.layer.11.attention.output.dense.bias", "bert.encoder.layer.11.attention.output.LayerNorm.weight", "bert.encoder.layer.11.attention.output.LayerNorm.bias", "bert.encoder.layer.11.intermediate.dense.weight", "bert.encoder.layer.11.intermediate.dense.bias", "bert.encoder.layer.11.output.dense.weight", "bert.encoder.layer.11.output.dense.bias", "bert.encoder.layer.11.output.LayerNorm.weight", "bert.encoder.layer.11.output.LayerNorm.bias", "bert.pooler.dense.weight", "bert.pooler.dense.bias", "out.weight", "out.bias". 
    Unexpected key(s) in state_dict: "module.bert.embeddings.word_embeddings.weight", "module.bert.embeddings.position_embeddings.weight", "module.bert.embeddings.token_type_embeddings.weight", "module.bert.embeddings.LayerNorm.weight", "module.bert.embeddings.LayerNorm.bias", "module.bert.encoder.layer.0.attention.self.query.weight", "module.bert.encoder.layer.0.attention.self.query.bias", "module.bert.encoder.layer.0.attention.self.key.weight", "module.bert.encoder.layer.0.attention.self.key.bias", "module.bert.encoder.layer.0.attention.self.value.weight", "module.bert.encoder.layer.0.attention.self.value.bias", "module.bert.encoder.layer.0.attention.output.dense.weight", "module.bert.encoder.layer.0.attention.output.dense.bias", "module.bert.encoder.layer.0.attention.output.LayerNorm.weight", "module.bert.encoder.layer.0.attention.output.LayerNorm.bias", "module.bert.encoder.layer.0.intermediate.dense.weight", "module.bert.encoder.layer.0.intermediate.dense.bias", "module.bert.encoder.layer.0.output.dense.weight", "module.bert.encoder.layer.0.output.dense.bias", "module.bert.encoder.layer.0.output.LayerNorm.weight", "module.bert.encoder.layer.0.output.LayerNorm.bias", "module.bert.encoder.layer.1.attention.self.query.weight", "module.bert.encoder.layer.1.attention.self.query.bias", "module.bert.encoder.layer.1.attention.self.key.weight", "module.bert.encoder.layer.1.attention.self.key.bias", "module.bert.encoder.layer.1.attention.self.value.weight", "module.bert.encoder.layer.1.attention.self.value.bias", "module.bert.encoder.layer.1.attention.output.dense.weight", "module.bert.encoder.layer.1.attention.output.dense.bias", "module.bert.encoder.layer.1.attention.output.LayerNorm.weight", "module.bert.encoder.layer.1.attention.output.LayerNorm.bias", "module.bert.encoder.layer.1.intermediate.dense.weight", "module.bert.encoder.layer.1.intermediate.dense.bias", "module.bert.encoder.layer.1.output.dense.weight", "module.bert.encoder.layer.1.output.dense.bias", "module.bert.encoder.layer.1.output.LayerNorm.weight", "module.bert.encoder.layer.1.output.LayerNorm.bias", "module.bert.encoder.layer.2.attention.self.query.weight", "module.bert.encoder.layer.2.attention.self.query.bias", "module.bert.encoder.layer.2.attention.self.key.weight", "module.bert.encoder.layer.2.attention.self.key.bias", "module.bert.encoder.layer.2.attention.self.value.weight", "module.bert.encoder.layer.2.attention.self.value.bias", "module.bert.encoder.layer.2.attention.output.dense.weight", "module.bert.encoder.layer.2.attention.output.dense.bias", "module.bert.encoder.layer.2.attention.output.LayerNorm.weight", "module.bert.encoder.layer.2.attention.output.LayerNorm.bias", "module.bert.encoder.layer.2.intermediate.dense.weight", "module.bert.encoder.layer.2.intermediate.dense.bias", "module.bert.encoder.layer.2.output.dense.weight", "module.bert.encoder.layer.2.output.dense.bias", "module.bert.encoder.layer.2.output.LayerNorm.weight", "module.bert.encoder.layer.2.output.LayerNorm.bias", "module.bert.encoder.layer.3.attention.self.query.weight", "module.bert.encoder.layer.3.attention.self.query.bias", "module.bert.encoder.layer.3.attention.self.key.weight", "module.bert.encoder.layer.3.attention.self.key.bias", "module.bert.encoder.layer.3.attention.self.value.weight", "module.bert.encoder.layer.3.attention.self.value.bias", "module.bert.encoder.layer.3.attention.output.dense.weight", "module.bert.encoder.layer.3.attention.output.dense.bias", "module.bert.encoder.layer.3.attention.output.LayerNorm.weight", 
"module.bert.encoder.layer.3.attention.output.LayerNorm.bias", "module.bert.encoder.layer.3.intermediate.dense.weight", "module.bert.encoder.layer.3.intermediate.dense.bias", "module.bert.encoder.layer.3.output.dense.weight", "module.bert.encoder.layer.3.output.dense.bias", "module.bert.encoder.layer.3.output.LayerNorm.weight", "module.bert.encoder.layer.3.output.LayerNorm.bias", "module.bert.encoder.layer.4.attention.self.query.weight", "module.bert.encoder.layer.4.attention.self.query.bias", "module.bert.encoder.layer.4.attention.self.key.weight", "module.bert.encoder.layer.4.attention.self.key.bias", "module.bert.encoder.layer.4.attention.self.value.weight", "module.bert.encoder.layer.4.attention.self.value.bias", "module.bert.encoder.layer.4.attention.output.dense.weight", "module.bert.encoder.layer.4.attention.output.dense.bias", "module.bert.encoder.layer.4.attention.output.LayerNorm.weight", "module.bert.encoder.layer.4.attention.output.LayerNorm.bias", "module.bert.encoder.layer.4.intermediate.dense.weight", "module.bert.encoder.layer.4.intermediate.dense.bias", "module.bert.encoder.layer.4.output.dense.weight", "module.bert.encoder.layer.4.output.dense.bias", "module.bert.encoder.layer.4.output.LayerNorm.weight", "module.bert.encoder.layer.4.output.LayerNorm.bias", "module.bert.encoder.layer.5.attention.self.query.weight", "module.bert.encoder.layer.5.attention.self.query.bias", "module.bert.encoder.layer.5.attention.self.key.weight", "module.bert.encoder.layer.5.attention.self.key.bias", "module.bert.encoder.layer.5.attention.self.value.weight", "module.bert.encoder.layer.5.attention.self.value.bias", "module.bert.encoder.layer.5.attention.output.dense.weight", "module.bert.encoder.layer.5.attention.output.dense.bias", "module.bert.encoder.layer.5.attention.output.LayerNorm.weight", "module.bert.encoder.layer.5.attention.output.LayerNorm.bias", "module.bert.encoder.layer.5.intermediate.dense.weight", "module.bert.encoder.layer.5.intermediate.dense.bias", "module.bert.encoder.layer.5.output.dense.weight", "module.bert.encoder.layer.5.output.dense.bias", "module.bert.encoder.layer.5.output.LayerNorm.weight", "module.bert.encoder.layer.5.output.LayerNorm.bias", "module.bert.encoder.layer.6.attention.self.query.weight", "module.bert.encoder.layer.6.attention.self.query.bias", "module.bert.encoder.layer.6.attention.self.key.weight", "module.bert.encoder.layer.6.attention.self.key.bias", "module.bert.encoder.layer.6.attention.self.value.weight", "module.bert.encoder.layer.6.attention.self.value.bias", "module.bert.encoder.layer.6.attention.output.dense.weight", "module.bert.encoder.layer.6.attention.output.dense.bias", "module.bert.encoder.layer.6.attention.output.LayerNorm.weight", "module.bert.encoder.layer.6.attention.output.LayerNorm.bias", "module.bert.encoder.layer.6.intermediate.dense.weight", "module.bert.encoder.layer.6.intermediate.dense.bias", "module.bert.encoder.layer.6.output.dense.weight", "module.bert.encoder.layer.6.output.dense.bias", "module.bert.encoder.layer.6.output.LayerNorm.weight", "module.bert.encoder.layer.6.output.LayerNorm.bias", "module.bert.encoder.layer.7.attention.self.query.weight", "module.bert.encoder.layer.7.attention.self.query.bias", "module.bert.encoder.layer.7.attention.self.key.weight", "module.bert.encoder.layer.7.attention.self.key.bias", "module.bert.encoder.layer.7.attention.self.value.weight", "module.bert.encoder.layer.7.attention.self.value.bias", "module.bert.encoder.layer.7.attention.output.dense.weight", 
"module.bert.encoder.layer.7.attention.output.dense.bias", "module.bert.encoder.layer.7.attention.output.LayerNorm.weight", "module.bert.encoder.layer.7.attention.output.LayerNorm.bias", "module.bert.encoder.layer.7.intermediate.dense.weight", "module.bert.encoder.layer.7.intermediate.dense.bias", "module.bert.encoder.layer.7.output.dense.weight", "module.bert.encoder.layer.7.output.dense.bias", "module.bert.encoder.layer.7.output.LayerNorm.weight", "module.bert.encoder.layer.7.output.LayerNorm.bias", "module.bert.encoder.layer.8.attention.self.query.weight", "module.bert.encoder.layer.8.attention.self.query.bias", "module.bert.encoder.layer.8.attention.self.key.weight", "module.bert.encoder.layer.8.attention.self.key.bias", "module.bert.encoder.layer.8.attention.self.value.weight", "module.bert.encoder.layer.8.attention.self.value.bias", "module.bert.encoder.layer.8.attention.output.dense.weight", "module.bert.encoder.layer.8.attention.output.dense.bias", "module.bert.encoder.layer.8.attention.output.LayerNorm.weight", "module.bert.encoder.layer.8.attention.output.LayerNorm.bias", "module.bert.encoder.layer.8.intermediate.dense.weight", "module.bert.encoder.layer.8.intermediate.dense.bias", "module.bert.encoder.layer.8.output.dense.weight", "module.bert.encoder.layer.8.output.dense.bias", "module.bert.encoder.layer.8.output.LayerNorm.weight", "module.bert.encoder.layer.8.output.LayerNorm.bias", "module.bert.encoder.layer.9.attention.self.query.weight", "module.bert.encoder.layer.9.attention.self.query.bias", "module.bert.encoder.layer.9.attention.self.key.weight", "module.bert.encoder.layer.9.attention.self.key.bias", "module.bert.encoder.layer.9.attention.self.value.weight", "module.bert.encoder.layer.9.attention.self.value.bias", "module.bert.encoder.layer.9.attention.output.dense.weight", "module.bert.encoder.layer.9.attention.output.dense.bias", "module.bert.encoder.layer.9.attention.output.LayerNorm.weight", "module.bert.encoder.layer.9.attention.output.LayerNorm.bias", "module.bert.encoder.layer.9.intermediate.dense.weight", "module.bert.encoder.layer.9.intermediate.dense.bias", "module.bert.encoder.layer.9.output.dense.weight", "module.bert.encoder.layer.9.output.dense.bias", "module.bert.encoder.layer.9.output.LayerNorm.weight", "module.bert.encoder.layer.9.output.LayerNorm.bias", "module.bert.encoder.layer.10.attention.self.query.weight", "module.bert.encoder.layer.10.attention.self.query.bias", "module.bert.encoder.layer.10.attention.self.key.weight", "module.bert.encoder.layer.10.attention.self.key.bias", "module.bert.encoder.layer.10.attention.self.value.weight", "module.bert.encoder.layer.10.attention.self.value.bias", "module.bert.encoder.layer.10.attention.output.dense.weight", "module.bert.encoder.layer.10.attention.output.dense.bias", "module.bert.encoder.layer.10.attention.output.LayerNorm.weight", "module.bert.encoder.layer.10.attention.output.LayerNorm.bias", "module.bert.encoder.layer.10.intermediate.dense.weight", "module.bert.encoder.layer.10.intermediate.dense.bias", "module.bert.encoder.layer.10.output.dense.weight", "module.bert.encoder.layer.10.output.dense.bias", "module.bert.encoder.layer.10.output.LayerNorm.weight", "module.bert.encoder.layer.10.output.LayerNorm.bias", "module.bert.encoder.layer.11.attention.self.query.weight", "module.bert.encoder.layer.11.attention.self.query.bias", "module.bert.encoder.layer.11.attention.self.key.weight", "module.bert.encoder.layer.11.attention.self.key.bias", "module.bert.encoder.layer.11.attention.self.value.weight", 
"module.bert.encoder.layer.11.attention.self.value.bias", "module.bert.encoder.layer.11.attention.output.dense.weight", "module.bert.encoder.layer.11.attention.output.dense.bias", "module.bert.encoder.layer.11.attention.output.LayerNorm.weight", "module.bert.encoder.layer.11.attention.output.LayerNorm.bias", "module.bert.encoder.layer.11.intermediate.dense.weight", "module.bert.encoder.layer.11.intermediate.dense.bias", "module.bert.encoder.layer.11.output.dense.weight", "module.bert.encoder.layer.11.output.dense.bias", "module.bert.encoder.layer.11.output.LayerNorm.weight", "module.bert.encoder.layer.11.output.LayerNorm.bias", "module.bert.pooler.dense.weight", "module.bert.pooler.dense.bias", "module.out.weight", "module.out.bias". 
abhishekkrthakur commented 4 years ago

Sorry, I lost track of this. I don't know if you have found the solution already, but if not, you should use the following when saving the model:

torch.save(model.module.state_dict(), save_path)

Then you will be able to load it on both CPU and GPU without DataParallel.
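
For reference, a minimal sketch of that save/load flow, using the BERTBaseUncased and config.MODEL_PATH names from this thread (the model variable name below is just a placeholder for the DataParallel-wrapped model used during training):

# During/after training: model is wrapped in nn.DataParallel, so save the
# underlying module's state_dict (keys without the "module." prefix).
torch.save(model.module.state_dict(), config.MODEL_PATH)

# At inference time: load into a plain BERTBaseUncased on CPU, no DataParallel.
DEVICE = torch.device("cpu")
MODEL = BERTBaseUncased()
MODEL.load_state_dict(torch.load(config.MODEL_PATH, map_location=DEVICE))
MODEL.to(DEVICE)
MODEL.eval()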

I'm closing this issue. Feel free to re-open.