Open amir90 opened 4 months ago
The problem was solved by rendering the images at 512 × 512 resolution. But now I am getting a crash during instant-ngp training at a random epoch (never later than epoch 50) when training on 1000 views per epoch. I am training on an A6000, and the reported memory usage never goes above ~6 GB of VRAM during training, so it's far from an OOM.
```
/code/training/train.py", line 177, in train_step
    outputs = self.model.render(rays_o, rays_d, staged=False, bg_color=bg_color, perturb=True, force_all_rays=True)
/code/models/renderer.py", line 672, in render
    results = _run(rays_o, rays_d, **kwargs)
/code/models/renderer.py", line 487, in run_cuda
    sigmas, rgbs, normals = self(xyzs, dirs, light_d, ratio=ambient_ratio, shading=shading)
...
site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: invalid configuration argument
```
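In case it helps narrow this down: CUDA kernel launches are asynchronous, so the frame where a CUDA error surfaces is not necessarily the call that actually failed. Rerunning with synchronous launches should make the traceback point at the real failing kernel (the `train.py` invocation below is a placeholder for the actual training command):

```shell
# Force synchronous CUDA launches so the Python traceback points at the
# kernel that actually failed, rather than a later unrelated call.
# (The command after the env var is a placeholder for the real invocation.)
CUDA_LAUNCH_BLOCKING=1 python training/train.py
```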
Hello, I am getting the following error when trying to train a base NeRF with view data generated using the Blender NeRF addon:
```
RuntimeError: The size of tensor a (512) must match the size of tensor b (1920) at non-singleton dimension 2
```
This happens in the call to `self.eval_step(data)` in `train.py`.
I am not sure what is wrong with the view data. Could you add the view data used for the cat demo to the repo as a reference?
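For reference, the same error can be reproduced with a minimal sketch (the shapes below are hypothetical, assuming an elementwise op compares a 512-wide tensor against a 1920-wide one, e.g. a render at one resolution versus a ground-truth image at another):

```python
import torch

# Hypothetical shapes: broadcasting fails when the last dimensions
# disagree and neither is 1 (512 vs. 1920 here).
a = torch.zeros(1, 3, 512)   # e.g. a rendered image, 512 px wide
b = torch.zeros(1, 3, 1920)  # e.g. a ground-truth image, 1920 px wide

try:
    a + b
except RuntimeError as e:
    # Prints: The size of tensor a (512) must match the size of
    # tensor b (1920) at non-singleton dimension 2
    print(e)
```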
Thanks!