tarashakhurana / 4d-occ-forecasting

CVPR 2023: Official code for "Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting"
https://www.cs.cmu.edu/~tkhurana/ff4d/index.html
MIT License

Loss function - How can I use L1 loss to train the network? #7

Closed · foolhard closed this issue 1 year ago

foolhard commented 1 year ago

Hello @tarashakhurana ,

I want to use L1 loss to train the network, as you mentioned in your paper, but I can't find where L1 loss is used in this repo.

Could you provide the code for using L1 loss for training?


tarashakhurana commented 1 year ago

Hi, I'm not sure I understand your question. We do use L1 loss for training, as specified here. One potential confusion to clarify is that we manually compute the gradient of the loss function with respect to the density grid (which is grad_sigma here). You can find that implemented here. With the current code, only L1, L2 and AbsRel losses can be used to train the model. If you need to train with any other loss, I can provide a Differentiable Voxel Rendering layer in PyTorch that does not require manual gradient computation but has a huge memory footprint.
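
To make the manual-gradient path above concrete, here is a minimal sketch (hypothetical function and variable names, not the repo's actual code) of the quantity the renderer needs from the loss side: for L1, L2, and AbsRel the derivative of the loss with respect to the rendered depth has a simple closed form, which the rendering kernel can then chain into grad_sigma.

```python
# Minimal sketch, not the repo's code: the manual-backward path only needs
# d(loss)/d(predicted depth); the renderer chains it into grad_sigma itself.
import torch

def loss_and_grad(pred_depth, gt_depth, loss_type="l1"):
    """Return the scalar loss and its gradient w.r.t. pred_depth."""
    n = pred_depth.numel()
    if loss_type == "l1":
        loss = torch.abs(pred_depth - gt_depth).mean()
        grad = torch.sign(pred_depth - gt_depth) / n
    elif loss_type == "l2":
        loss = ((pred_depth - gt_depth) ** 2).mean()
        grad = 2.0 * (pred_depth - gt_depth) / n
    elif loss_type == "absrel":
        loss = (torch.abs(pred_depth - gt_depth) / gt_depth).mean()
        grad = torch.sign(pred_depth - gt_depth) / (gt_depth * n)
    else:
        raise ValueError(f"unsupported loss: {loss_type}")
    return loss, grad
```

Any loss without such a closed-form depth gradient would need the fully differentiable layer mentioned above, which trades this hand-derived step for autograd's memory footprint.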

foolhard commented 1 year ago

Hi, thanks for the explanation. I would like the Differentiable Voxel Rendering layer in PyTorch so I can experiment more flexibly. Could you please share it?

foolhard commented 1 year ago

@tarashakhurana Referring to your previous repo here, I implemented a Differentiable Voxel Rendering layer myself and it works.

However, I would still like your code as a reference. Thanks a lot.
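
For readers who want to try the same thing, here is a minimal sketch (assuming the standard volume-rendering formulation and hypothetical tensor names; this is not the released layer) of a purely autograd-based voxel rendering step: densities are trilinearly sampled along each ray, an expected depth is rendered, and any PyTorch loss then back-propagates into the density grid.

```python
# Minimal sketch of a differentiable voxel rendering layer in pure PyTorch.
# Hypothetical shapes/names; the per-sample intermediates that autograd keeps
# for backward are what cause the large memory footprint mentioned above.
import torch
import torch.nn.functional as F

def render_expected_depth(sigma_grid, ray_pts, t_vals):
    """
    sigma_grid: (1, 1, D, H, W) non-negative densities.
    ray_pts:    (N, S, 3) sample coordinates in [-1, 1] (grid_sample convention).
    t_vals:     (N, S) sorted distances of the samples along each ray.
    Returns the expected depth per ray, shape (N,).
    """
    N, S, _ = ray_pts.shape
    # Trilinearly interpolate the density at every ray sample.
    grid = ray_pts.view(1, N, S, 1, 3)
    sigma = F.grid_sample(sigma_grid, grid, align_corners=True)
    sigma = sigma.reshape(N, S).clamp(min=0.0)

    # Standard volume-rendering weights: alpha, transmittance, weights.
    delta = torch.diff(t_vals, dim=1, append=t_vals[:, -1:])
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1
    )[:, :-1]
    weights = trans * alpha

    return (weights * t_vals).sum(dim=1)

# Usage: any differentiable loss now reaches sigma_grid without manual gradients.
sigma_grid = torch.rand(1, 1, 32, 32, 32, requires_grad=True)
ray_pts = torch.rand(128, 64, 3) * 2.0 - 1.0
t_vals, _ = torch.sort(torch.rand(128, 64) * 40.0, dim=1)
gt_depth = torch.rand(128) * 40.0

pred_depth = render_expected_depth(sigma_grid, ray_pts, t_vals)
loss = torch.abs(pred_depth - gt_depth).mean()  # L1 loss
loss.backward()                                 # gradients flow into sigma_grid
```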

tarashakhurana commented 1 year ago

Thanks for the reminder! I have added it now: https://github.com/tarashakhurana/4d-occ-forecasting/tree/main#new-differentiable-voxel-rendering-implemented-as-a-layer-in-pytorch

foolhard commented 1 year ago

It worked. Thanks a lot.