[INFO] Trainer: ngp | 2024-02-18_08-03-00 | cuda | fp16 | /data/stone_nerf
[INFO] #parameters: 13429360
[INFO] Loading latest checkpoint ...
[INFO] Latest checkpoint is /data/stone_nerf/checkpoints/ngp_ep0052.pth
Traceback (most recent call last):
  File "/data/nerf-navigation-master/simulate.py", line 194, in <module>
    trainer = Trainer('ngp', opt, model, device=device, workspace=opt.workspace, criterion=criterion, fp16=opt.fp16, metrics=metrics, use_checkpoint=opt.ckpt)
  File "/data/nerf-navigation-master/nerf/utils.py", line 366, in __init__
    self.load_checkpoint()
  File "/data/nerf-navigation-master/nerf/utils.py", line 1017, in load_checkpoint
    missing_keys, unexpected_keys = self.model.load_state_dict(checkpoint_dict['model'], strict=False)
  File "/root/miniconda3/envs/nerf/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for NeRFNetwork:
    size mismatch for encoder.embeddings: copying a param with shape torch.Size([6328848, 2]) from checkpoint, the shape in current model is torch.Size([6710008, 2])
The failure is a parameter shape mismatch: the encoder.embeddings tensor stored in the checkpoint has shape torch.Size([6328848, 2]), while the freshly constructed model expects torch.Size([6710008, 2]). load_state_dict therefore raises even though strict=False is passed, since strict=False only tolerates missing or unexpected keys, not size mismatches.
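For reference, here is a minimal diagnostic sketch (assuming the checkpoint stores the weights under the 'model' key, as the traceback suggests, and that `model` is the NeRFNetwork instance built in simulate.py) that lists every parameter whose shape differs between the checkpoint and the current model:

```python
import torch

def report_shape_mismatches(model, ckpt_path):
    """Print every parameter whose shape differs between the checkpoint
    and the current model -- the same comparison load_state_dict performs."""
    ckpt_state = torch.load(ckpt_path, map_location="cpu")["model"]
    model_state = model.state_dict()
    for name, ckpt_param in ckpt_state.items():
        if name in model_state and model_state[name].shape != ckpt_param.shape:
            print(f"{name}: checkpoint {tuple(ckpt_param.shape)} "
                  f"vs current model {tuple(model_state[name].shape)}")

# Example call with the checkpoint path from the log above:
# report_shape_mismatches(model, "/data/stone_nerf/checkpoints/ngp_ep0052.pth")
```

In my run, encoder.embeddings is the only entry reported.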
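One workaround I considered is dropping the mismatched tensor before calling load_state_dict, roughly as in the sketch below, but I assume that would just leave encoder.embeddings at its random initialization instead of restoring the trained weights:

```python
import torch

# Rough sketch only: keep checkpoint tensors whose shapes match the current
# model and load them with strict=False; the mismatched encoder.embeddings
# would then remain randomly initialized.
def load_matching_weights(model, ckpt_path):
    ckpt_state = torch.load(ckpt_path, map_location="cpu")["model"]
    model_state = model.state_dict()
    filtered = {k: v for k, v in ckpt_state.items()
                if k in model_state and model_state[k].shape == v.shape}
    return model.load_state_dict(filtered, strict=False)
```

Or do I need to rebuild the model with exactly the same encoder options that were used when ngp_ep0052.pth was trained?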
How can I fix it?