ashawkey / stable-dreamfusion

Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.
Apache License 2.0
7.99k stars 710 forks

TypeError: grid_encode_forward(): incompatible function arguments. #346

Open njustephencurry opened 8 months ago

njustephencurry commented 8 months ago

Description

The whole error message is:

  0% 0/100 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "../main_nerf.py", line 154, in <module>
    trainer.train(train_loader, valid_loader, max_epoch)
  File "/home/wangwenhao/xhy/torch-ngp/nerf/utils.py", line 648, in train
    self.train_one_epoch(train_loader)
  File "/home/wangwenhao/xhy/torch-ngp/nerf/utils.py", line 856, in train_one_epoch
    self.model.update_extra_state()
  File "/home/wangwenhao/anaconda3/envs/torch-ngp/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/wangwenhao/xhy/torch-ngp/nerf/renderer.py", line 481, in update_extra_state
    sigmas = self.density(cas_xyzs)['sigma'].reshape(-1).detach()
  File "/home/wangwenhao/xhy/torch-ngp/nerf/network.py", line 129, in density
    x = self.encoder(x, bound=self.bound)
  File "/home/wangwenhao/anaconda3/envs/torch-ngp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wangwenhao/xhy/torch-ngp/gridencoder/grid.py", line 156, in forward
    outputs = grid_encode(inputs, self.embeddings, self.offsets, self.per_level_scale, self.base_resolution, inputs.requires_grad, self.gridtype_id, self.align_corners, self.interp_id)
  File "/home/wangwenhao/anaconda3/envs/torch-ngp/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 110, in decorate_fwd
    return fwd(*args, **kwargs)
  File "/home/wangwenhao/xhy/torch-ngp/gridencoder/grid.py", line 54, in forward
    _backend.grid_encode_forward(inputs, embeddings, offsets, outputs, B, D, C, L, S, H, dy_dx, gridtype, align_corners, interpolation)
TypeError: grid_encode_forward(): incompatible function arguments. The following argument types are supported:

  1. (arg0: at::Tensor, arg1: at::Tensor, arg2: at::Tensor, arg3: at::Tensor, arg4: int, arg5: int, arg6: int, arg7: int, arg8: float, arg9: int, arg10: Optional[at::Tensor], arg11: int, arg12: bool) -> None

      Invoked with: tensor([[0.0076, 0.0062, 0.0037],
              [0.0026, 0.0026, 0.0108],
              [0.0060, 0.0050, 0.0165],
              ...,
              [0.9925, 1.0000, 0.9779],
              [0.9930, 0.9959, 0.9878],
              [0.9938, 0.9985, 0.9926]], device='cuda:0'), tensor([[-7.7486e-07,  5.3644e-05],
              [-8.2314e-05, -7.3612e-05],
              [-3.8505e-05,  2.6822e-05],
              ...,
              [ 2.7418e-05, -5.0962e-05],
              [ 6.2227e-05,  7.5281e-05],
              [ 4.2677e-05,  9.2626e-05]], device='cuda:0', dtype=torch.float16), tensor([      0,    4920,   18744,   51512,  136696,  352696,  876984, 1401272,
              1925560, 2449848, 2974136, 3498424, 4022712, 4547000, 5071288, 5595576,
              6119864], device='cuda:0', dtype=torch.int32), tensor([[[-3.5000e+00, -1.0234e+00],
               [-1.9200e+03, -9.7803e-01],
               [-1.1660e+03,  2.1504e+00],
               ...,
               [-1.1187e-03,  2.0840e+00],
               [-8.2092e-03, -1.8594e+00],
               [-1.8560e+04, -1.6943e+00]],
    
              [[ 9.4652e-05,  2.0859e+00],
               [ 3.4432e+04, -1.8623e+00],
               [ 6.1696e+04, -1.6992e+00],
               ...,
               [ 1.5182e-02, -1.6963e+00],
               [-5.0125e-03,  2.1934e+00],
               [ 1.5312e+00, -1.6230e+00]],
    
              [[ 2.5750e+02, -1.6973e+00],
               [-8.5545e-04,  2.1953e+00],
               [-8.5831e-05, -1.6191e+00],
               ...,
               [ 5.6576e+04,  1.7842e+00],
               [-4.1600e+02, -1.9463e+00],
               [ 8.9478e-02,  2.1836e+00]],
    
              ...,
    
              [[ 4.9453e+00,  2.2051e+00],
               [ 1.1475e+02,  1.7803e+00],
               [ 1.2398e-05, -1.8457e+00],
               ...,
               [ 0.0000e+00,  0.0000e+00],
               [ 5.9605e-08,  0.0000e+00],
               [ 0.0000e+00,  0.0000e+00]],
    
              [[ 5.9605e-08,  0.0000e+00],
               [ 0.0000e+00,  0.0000e+00],
               [ 5.9605e-08,  0.0000e+00],
               ...,
               [ 0.0000e+00,  0.0000e+00],
               [ 5.9605e-08,  0.0000e+00],
               [ 0.0000e+00,  0.0000e+00]],
    
              [[ 5.9605e-08,  0.0000e+00],
               [ 0.0000e+00,  0.0000e+00],
               [ 5.9605e-08,  0.0000e+00],
               ...,
               [ 0.0000e+00,  0.0000e+00],
               [ 0.0000e+00,  0.0000e+00],
               [ 0.0000e+00,  0.0000e+00]]], device='cuda:0', dtype=torch.float16), 2097152, 3, 2, 16, 0.4666666666666667, 16, None, 0, False, 0
        0% 0/100 [00:00<?, ?it/s]

Steps to Reproduce

python main_nerf.py data/nerf_synthetic/lego --workspace trial_nerf -O --bound 1.0 --scale 0.8 --dt_gamma 0

Expected Behavior

I just want to resolve this confusing error.

Environment

torch 1.11.0+cu113, torchvision 0.12.0+cu113

njustephencurry commented 8 months ago

Solution: reinstall the `gridencoder` extension with `pip install ./gridencoder`. The previously compiled `_backend` was built from an older version of the source, so its `grid_encode_forward` signature no longer matches what the updated Python wrapper in `grid.py` passes; rebuilding the extension brings the two back in sync.