foolhard closed this issue 1 year ago
Hi, I'm not sure I understand your question. We do use L1 loss for training, as specified here. One potential confusion to clarify is that we manually compute the gradient of the loss function with respect to the density grid (which is grad_sigma
here). You can find that implemented here. With the current code, only L1, L2 and AbsRel losses can be used to train the model. If you need to train with any other loss, I can provide a Differentiable Voxel Rendering layer in PyTorch that does not require manual gradient computation but has a huge memory footprint.
Hi, thanks for the explanation. I'd like a Differentiable Voxel Rendering layer in PyTorch so I can experiment more flexibly. Could you please share it?
@tarashakhurana Referring to your previous repo here, I implemented a Differentiable Voxel Rendering layer and it works.
However, I would still like your code as a reference. Thanks a lot.
Thanks for the reminder! I have added it now: https://github.com/tarashakhurana/4d-occ-forecasting/tree/main#new-differentiable-voxel-rendering-implemented-as-a-layer-in-pytorch
It worked. Thanks a lot.
Hello @tarashakhurana ,
I want to use L1 loss to train the network, as you mentioned in your paper, but in this repo L1 loss does not seem to be used.
Could you provide the code for using L1 loss for training?