ayaanzhaque / instruct-nerf2nerf

Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions (ICCV 2023)
https://instruct-nerf2nerf.github.io/
MIT License

Can we use the nerfacto-huge in instruct-nerf2nerf? #49

Closed rockywind closed 8 months ago

rockywind commented 1 year ago

Hi, thank you for sharing this great work. I ran into the error below. The first NeRF model was trained with nerfacto-huge.

│ module.py:2041 in load_state_dict                         │
│                                                           │
│   2038 │   │   │   │   │   │   ', '.join('"{}"'.format(k) │
│   2039 │   │                                              │
│   2040 │   │   if len(error_msgs) > 0:                    │
│ ❱ 2041 │   │   │   raise RuntimeError('Error(s) in loadin │
│   2042 │   │   │   │   │   │   │      self.__class__.__na │
│   2043 │   │   return _IncompatibleKeys(missing_keys, une │
│   2044                                                    │
╰───────────────────────────────────────────────────────────╯
RuntimeError: Error(s) in loading state_dict for InstructNeRF2NeRFModel:
        size mismatch for field.mlp_base_grid.hash_table: copying a param with shape torch.Size([33554432, 2]) from checkpoint, the shape in current model is torch.Size([8388608, 2]).
        size mismatch for field.mlp_base_grid.tcnn_encoding.params: copying a param with shape torch.Size([47857600]) from checkpoint, the shape in current model is torch.Size([12196240]).
        size mismatch for field.mlp_base_mlp.tcnn_encoding.params: copying a param with shape torch.Size([12288]) from checkpoint, the shape in current model is torch.Size([3072]).
        size mismatch for field.mlp_base.0.hash_table: copying a param with shape torch.Size([33554432, 2]) from checkpoint, the shape in current model is torch.Size([8388608, 2]).
        size mismatch for field.mlp_base.0.tcnn_encoding.params: copying a param with shape torch.Size([47857600]) from checkpoint, the shape in current model is torch.Size([12196240]).
        size mismatch for field.mlp_base.1.tcnn_encoding.params: copying a param with shape torch.Size([12288]) from checkpoint, the shape in current model is torch.Size([3072]).
        size mismatch for field.mlp_head.tcnn_encoding.params: copying a param with shape torch.Size([86016]) from checkpoint, the shape in current model is torch.Size([9216]).
        size mismatch for proposal_networks.0.encoding.tcnn_encoding.params: copying a param with shape torch.Size([913264]) from checkpoint, the shape in current model is torch.Size([766528]).
        size mismatch for proposal_networks.0.mlp_base.0.tcnn_encoding.params: copying a param with shape torch.Size([913264]) from checkpoint, the shape in current model is torch.Size([766528]).
        size mismatch for proposal_networks.1.encoding.hash_table: copying a param with shape torch.Size([917504, 2]) from checkpoint, the shape in current model is torch.Size([655360, 2]).
        size mismatch for proposal_networks.1.encoding.tcnn_encoding.params: copying a param with shape torch.Size([1412224]) from checkpoint, the shape in current model is torch.Size([860160]).
        size mismatch for proposal_networks.1.mlp_base.0.hash_table: copying a param with shape torch.Size([917504, 2]) from checkpoint, the shape in current model is torch.Size([655360, 2]).
        size mismatch for proposal_networks.1.mlp_base.0.tcnn_encoding.params: copying a param with shape torch.Size([1412224]) from checkpoint, the shape in current model is torch.Size([860160]).

ayaanzhaque commented 1 year ago

Unfortunately, out of the box in2n is only set up to work with nerfacto. However, it should be pretty easy to edit the code to make this possible. You will have to change the config as well as change in2n.py to inherit from nerfacto-huge instead of nerfacto.
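
A minimal sketch of the model side (not the repo's actual code, and assuming the in2n model subclasses nerfstudio's NerfactoModel, with nerfacto-huge being the same NerfactoModel class configured with larger fields): the shapes that fail to load are all determined by the config values the model is built with, so that is where the change has to happen.

```python
# Minimal sketch, not the actual repo code: assumes in2n's model subclasses
# nerfstudio's NerfactoModel / NerfactoModelConfig. nerfacto-huge uses the same
# NerfactoModel class with larger hyperparameters, so the mismatched shapes in
# the error come from the config values used to build the model.
from dataclasses import dataclass, field
from typing import Type

from nerfstudio.models.nerfacto import NerfactoModel, NerfactoModelConfig


@dataclass
class InstructNeRF2NeRFModelConfig(NerfactoModelConfig):
    """Field and proposal-network sizes set here must match the loaded checkpoint."""

    _target: Type = field(default_factory=lambda: InstructNeRF2NeRFModel)
    # ... in2n-specific options (InstructPix2Pix guidance, etc.) would live here ...


class InstructNeRF2NeRFModel(NerfactoModel):
    """Same fields as nerfacto; their sizes are determined by the config above."""
```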

rockywind commented 1 year ago

Thanks, I'll give it a try.

brgrp commented 11 months ago

@rockywind did you get it working?

ayaanzhaque commented 11 months ago

NerfactoHuge config from nerfstudio: https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/configs/method_configs.py#L157C1-L202

Basically, you will have to rewrite the config here: https://github.com/ayaanzhaque/instruct-nerf2nerf/blob/main/in2n/in2n_config.py#L30-L68

The in2n config will have to match the nerfacto-huge config in its datamanager and model settings, so that the model is built with the same parameters as nerfacto-huge (see the sketch below).
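
For reference, a sketch of what the rewritten model block could look like. The field names follow nerfstudio's NerfactoModelConfig, and the values are the nerfacto-huge ones, which also line up with the checkpoint shapes in the error above (e.g. 16 levels x 2^21 entries x 2 features = [33554432, 2]); the import path and class name are assumptions based on the repo layout, so verify the exact values against the linked method_configs.py.

```python
# Sketch only: replace the model=... block inside the in2n TrainerConfig in
# in2n/in2n_config.py with nerfacto-huge-sized settings. Import path and class
# name are assumed from the repo layout; verify values against the nerfacto-huge
# entry in nerfstudio's method_configs.py linked above.
from in2n.in2n import InstructNeRF2NeRFModelConfig

model = InstructNeRF2NeRFModelConfig(
    eval_num_rays_per_chunk=1 << 15,
    num_nerf_samples_per_ray=64,
    num_proposal_samples_per_ray=(512, 512),
    proposal_net_args_list=[
        {"hidden_dim": 16, "log2_hashmap_size": 17, "num_levels": 5, "max_res": 512, "use_linear": False},
        # 7 levels x 2^17 entries x 2 features -> the [917504, 2] hash table the
        # checkpoint expects for proposal_networks.1 (plain nerfacto uses 5 levels).
        {"hidden_dim": 16, "log2_hashmap_size": 17, "num_levels": 7, "max_res": 2048, "use_linear": False},
    ],
    hidden_dim=256,        # base MLP width (64 in plain nerfacto)
    hidden_dim_color=256,  # color head width (64 in plain nerfacto)
    max_res=8192,
    # 16 levels x 2^21 entries x 2 features -> the [33554432, 2] hash table in the
    # nerfacto-huge checkpoint (plain nerfacto uses 2^19 entries per level).
    log2_hashmap_size=21,
)
```

The datamanager block (rays per batch, etc.) should likewise be copied from the nerfacto-huge TrainerConfig so the rest of the pipeline matches.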