hbb1 / torch-splatting

A pure PyTorch implementation of 3D Gaussian Splatting

Bug? #13

Open yuedajiong opened 6 months ago

yuedajiong commented 6 months ago
      [0., 0., 0.,  ..., 0., 0., 0.],
      [0., 0., 0.,  ..., 0., 0., 0.],
      [0., 0., 0.,  ..., 0., 0., 0.]]]], device='cuda:0')

loss tensor(1.0310, device='cuda:0')

Traceback (most recent call last):
  File "/root/superi/superv.py", line 886, in <module>
    main()
  File "/root/superi/superv.py", line 882, in main
    train(image_base=image_base, image_size=image_size, batch_size_train=2, batch_size_valid=1, epochs=20, checkpoint_path=os.path.join(work_path, checkpoint_base_path), checkpoint_best_file=os.path.join(work_path, checkpoint_base_path, checkpoint_best_name), device=device)
  File "/root/superi/superv.py", line 805, in train
    loss.backward()
  File "/opt/anaconda3/envs/superi/lib/python3.12/site-packages/torch/_tensor.py", line 522, in backward
    torch.autograd.backward(
  File "/opt/anaconda3/envs/superi/lib/python3.12/site-packages/torch/autograd/__init__.py", line 266, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
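For context, this RuntimeError usually means the loss tensor is not connected to the autograd graph: none of the tensors it was computed from require gradients, or the graph was cut somewhere (e.g. via .detach() or a torch.no_grad() block) before the loss was formed. Below is a minimal, hypothetical sketch (not taken from superv.py or this repo's renderer) that reproduces the error and shows the usual fix of making the optimized tensors require gradients:

    import torch

    # Case 1: parameters created without requires_grad=True.
    # The resulting loss has no grad_fn, so backward() raises the same
    # "element 0 of tensors does not require grad ..." RuntimeError.
    params = torch.zeros(10)                    # requires_grad defaults to False
    target = torch.ones(10)
    loss = ((params - target) ** 2).mean()      # loss.grad_fn is None
    try:
        loss.backward()
    except RuntimeError as e:
        print(e)

    # Case 2: make the parameters leaf tensors that require gradients
    # (or wrap them in torch.nn.Parameter); backward() then succeeds.
    params = torch.zeros(10, requires_grad=True)
    loss = ((params - target) ** 2).mean()
    loss.backward()
    print(params.grad.shape)                    # torch.Size([10])

Given that the rendered tensor pasted above is all zeros, it may be worth checking whether the render/loss path runs under torch.no_grad() or detaches the Gaussian parameters before the loss is computed; that is an assumption, not something the traceback alone proves.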