Kaixhin / Rainbow

Rainbow: Combining Improvements in Deep Reinforcement Learning

Pre-trained models param mismatch #54

Closed: asaran closed this issue 4 years ago

asaran commented 5 years ago

Are the pretrained model files correct, i.e. linked to the correct commit? I tried running the evaluation with the pretrained models for v1.3 and v1.4, but I get a runtime error: "Error(s) in loading state_dict for DQN".

```
Traceback (most recent call last):
  File "main.py", line 82, in <module>
    dqn = Agent(args, env)
  File "/home/akanksha/Documents/Rainbow/agent.py", line 24, in __init__
    self.online_net.load_state_dict(torch.load(args.model, map_location='cpu'))
  File "/home/akanksha/anaconda3/envs/rainbow/lib/python3.7/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DQN:
    Missing key(s) in state_dict: "convs.4.weight", "convs.4.bias".
    size mismatch for convs.0.weight: copying a param with shape torch.Size([32, 4, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 4, 8, 8]).
    size mismatch for convs.2.weight: copying a param with shape torch.Size([64, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([64, 32, 4, 4]).
    size mismatch for fc_h_v.weight_mu: copying a param with shape torch.Size([256, 576]) from checkpoint, the shape in current model is torch.Size([512, 3136]).
    size mismatch for fc_h_v.weight_sigma: copying a param with shape torch.Size([256, 576]) from checkpoint, the shape in current model is torch.Size([512, 3136]).
    size mismatch for fc_h_v.bias_mu: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for fc_h_v.bias_sigma: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for fc_h_v.weight_epsilon: copying a param with shape torch.Size([256, 576]) from checkpoint, the shape in current model is torch.Size([512, 3136]).
    size mismatch for fc_h_v.bias_epsilon: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for fc_h_a.weight_mu: copying a param with shape torch.Size([256, 576]) from checkpoint, the shape in current model is torch.Size([512, 3136]).
    size mismatch for fc_h_a.weight_sigma: copying a param with shape torch.Size([256, 576]) from checkpoint, the shape in current model is torch.Size([512, 3136]).
    size mismatch for fc_h_a.bias_mu: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for fc_h_a.bias_sigma: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for fc_h_a.weight_epsilon: copying a param with shape torch.Size([256, 576]) from checkpoint, the shape in current model is torch.Size([512, 3136]).
    size mismatch for fc_h_a.bias_epsilon: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
    size mismatch for fc_z_v.weight_mu: copying a param with shape torch.Size([51, 256]) from checkpoint, the shape in current model is torch.Size([51, 512]).
    size mismatch for fc_z_v.weight_sigma: copying a param with shape torch.Size([51, 256]) from checkpoint, the shape in current model is torch.Size([51, 512]).
    size mismatch for fc_z_v.weight_epsilon: copying a param with shape torch.Size([51, 256]) from checkpoint, the shape in current model is torch.Size([51, 512]).
    size mismatch for fc_z_a.weight_mu: copying a param with shape torch.Size([918, 256]) from checkpoint, the shape in current model is torch.Size([306, 512]).
    size mismatch for fc_z_a.weight_sigma: copying a param with shape torch.Size([918, 256]) from checkpoint, the shape in current model is torch.Size([306, 512]).
    size mismatch for fc_z_a.bias_mu: copying a param with shape torch.Size([918]) from checkpoint, the shape in current model is torch.Size([306]).
    size mismatch for fc_z_a.bias_sigma: copying a param with shape torch.Size([918]) from checkpoint, the shape in current model is torch.Size([306]).
    size mismatch for fc_z_a.weight_epsilon: copying a param with shape torch.Size([918, 256]) from checkpoint, the shape in current model is torch.Size([306, 512]).
    size mismatch for fc_z_a.bias_epsilon: copying a param with shape torch.Size([918]) from checkpoint, the shape in current model is torch.Size([306]).
```
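A quick way to see what a checkpoint actually contains, independent of the model code, is to load the raw state dict and print the stored shapes. A minimal sketch ('model.pth' is a placeholder for whichever pretrained file fails to load):

```python
import torch

# Minimal diagnostic: dump every parameter name and shape stored in a
# checkpoint, so it can be compared against the error messages above.
state_dict = torch.load('model.pth', map_location='cpu')
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```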

Kaixhin commented 5 years ago

v1.4 models are for data-efficient Rainbow, so you will need to run with `--architecture data-efficient --hidden-size 256`, but v1.3 models should work with the default hyperparameters. Could you please confirm?
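Going by the shapes in the traceback above, the two architectures can be told apart from the checkpoint alone: the canonical network's first convolution uses 8×8 kernels, while the data-efficient one's uses 5×5. A hedged sketch along those lines (`guess_architecture` is a made-up helper, not part of the repo):

```python
import torch

def guess_architecture(path):
    """Heuristic sketch: infer which --architecture a checkpoint was trained
    with from the first conv layer's kernel size, per the shapes in the
    traceback above (8x8 -> canonical, 5x5 -> data-efficient)."""
    state_dict = torch.load(path, map_location='cpu')
    # Newer checkpoints store the conv stack as convs.N; older ones as convN.
    first_conv = state_dict.get('convs.0.weight', state_dict.get('conv1.weight'))
    if first_conv is None:
        return 'unknown'
    return 'canonical' if first_conv.shape[-1] == 8 else 'data-efficient'
```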

asaran commented 5 years ago

Thanks for your reply. The v1.4 models work as expected with the additional arguments; however, the v1.3 models still fail with the default hyperparameters:

```
Traceback (most recent call last):
  File "main.py", line 82, in <module>
    dqn = Agent(args, env)
  File "/home/akanksha/Documents/Rainbow/agent.py", line 24, in __init__
    self.online_net.load_state_dict(torch.load(args.model, map_location='cpu'))
  File "/home/akanksha/anaconda3/envs/rainbow/lib/python3.7/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DQN:
    Missing key(s) in state_dict: "convs.0.weight", "convs.0.bias", "convs.2.weight", "convs.2.bias", "convs.4.weight", "convs.4.bias".
    Unexpected key(s) in state_dict: "conv1.weight", "conv1.bias", "conv2.weight", "conv2.bias", "conv3.weight", "conv3.bias".
```
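Notably, this second error shows no shape mismatches, only renamed keys: the v1.3 checkpoint stores the conv layers as conv1/conv2/conv3, while the current DQN keeps them in a sequential container named convs (convs.0, convs.2, convs.4). Assuming only the attribute names changed, a key-remapping shim like the following sketch may let the old weights load (`remap_v13_state_dict` is hypothetical, not part of the repo):

```python
import torch
from collections import OrderedDict

# Hypothetical mapping from the v1.3 attribute names to the current
# sequential-container indices, inferred from the missing/unexpected keys above.
RENAMES = {'conv1': 'convs.0', 'conv2': 'convs.2', 'conv3': 'convs.4'}

def remap_v13_state_dict(path):
    """Rename conv1/conv2/conv3 keys to convs.0/convs.2/convs.4 so a v1.3
    checkpoint can be loaded into the current module layout."""
    old = torch.load(path, map_location='cpu')
    new = OrderedDict()
    for key, tensor in old.items():
        prefix, dot, rest = key.partition('.')
        new_key = RENAMES[prefix] + dot + rest if prefix in RENAMES else key
        new[new_key] = tensor
    return new

# Usage sketch: load the remapped weights instead of the raw file, e.g.
# online_net.load_state_dict(remap_v13_state_dict('v1.3_model.pth'))
```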