While training on the blender dataset, this line in nerf_vth2.py appears to have a bug:

optimizer = torch.optim.Adam(params=grad_vars, lr=args.lrate, betas=(0.9, 0.999))

After adding

grad_vars = [var[1] for var in grad_vars if isinstance(var[1], torch.Tensor)]

before the line mentioned, training works. But I do not know the reason, nor whether this change will affect the results. The original pytorch-nerf seems to have similar code around here.
Traceback (most recent call last):
  File "nerf_vth2.py", line 961, in <module>
    train()
  File "nerf_vth2.py", line 765, in train
    render_kwargs_train, render_kwargs_test, start, grad_vars, optimizer = create_nerf(args)
  File "nerf_vth2.py", line 299, in create_nerf
    optimizer = torch.optim.Adam(params=grad_vars, lr=args.lrate, betas=(0.9, 0.999))
  File "/home/iccd/anaconda3/envs/SpikingNerf/lib/python3.8/site-packages/torch/optim/adam.py", line 33, in __init__
    super().__init__(params, defaults)
  File "/home/iccd/anaconda3/envs/SpikingNerf/lib/python3.8/site-packages/torch/optim/optimizer.py", line 192, in __init__
    self.add_param_group(param_group)
  File "/home/iccd/anaconda3/envs/SpikingNerf/lib/python3.8/site-packages/torch/optim/optimizer.py", line 512, in add_param_group
    raise TypeError("optimizer can only optimize Tensors, "
TypeError: optimizer can only optimize Tensors, but one of the params is tuple
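The error and the fix together suggest that grad_vars holds (name, tensor) tuples rather than bare tensors, which is what you get when parameters are collected with named_parameters() instead of parameters(). A minimal sketch reproducing the failure and the workaround, assuming that is how create_nerf builds grad_vars (the small Linear model here is just a stand-in for the NeRF networks):

```python
import torch

# Stand-in model; in nerf_vth2.py the parameters presumably come from the
# NeRF MLPs inside create_nerf().
model = torch.nn.Linear(3, 4)

# named_parameters() yields (name, tensor) tuples, so this list holds
# tuples, not tensors.
grad_vars = list(model.named_parameters())

# Adam rejects tuples, reproducing the reported traceback:
try:
    torch.optim.Adam(params=grad_vars, lr=5e-4, betas=(0.9, 0.999))
except TypeError as e:
    print(e)  # optimizer can only optimize Tensors, but one of the params is tuple

# Keeping only the tensor from each (name, tensor) pair restores the
# input Adam expects:
grad_vars = [var[1] for var in grad_vars if isinstance(var[1], torch.Tensor)]
optimizer = torch.optim.Adam(params=grad_vars, lr=5e-4, betas=(0.9, 0.999))
print(len(optimizer.param_groups[0]["params"]))  # 2 (weight and bias)
```

If this reading is right, the filter only unwraps each pair; the optimizer ends up with exactly the same parameter tensors, so the change should not affect training results.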