Lakonik / SSDNeRF

[ICCV 2023] Single-Stage Diffusion NeRF
https://lakonik.github.io/ssdnerf/
MIT License

question about inverse_code function #22

Closed bring728 closed 1 year ago

bring728 commented 1 year ago

Hi! Thank you for sharing this great work!

I have a question about this function: https://github.com/Lakonik/SSDNeRF/blob/3e50d1d9287d92ae40b5831fd6933ac64e125577/lib/models/autodecoders/base_nerf.py#L401C16-L491 What exactly is this function training?

As I understand it, SSDNeRF has three components: the diffusion U-Net, the triplane code, and the MLP decoder. I expected this function to train both the triplane code and the MLP decoder.

First of all, this function doesn't seem to train the MLP decoder, because of https://github.com/Lakonik/SSDNeRF/blob/3e50d1d9287d92ae40b5831fd6933ac64e125577/lib/models/autodecoders/base_nerf.py#L412

In this function, the rendering loss is computed and the triplane code is updated, but the gradient of the rendering loss appears to be overwritten by the prior gradient caching you mention in the paper, https://github.com/Lakonik/SSDNeRF/blob/3e50d1d9287d92ae40b5831fd6933ac64e125577/lib/models/autodecoders/base_nerf.py#L456-L467 so the rendering loss doesn't seem to have any effect at all.
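For reference, the pattern being discussed, per-scene optimization of the triplane code with a cached prior gradient while the decoder stays frozen, could look roughly like the sketch below. This is only an illustrative sketch under my own assumptions, not the actual SSDNeRF code: the names `code`, `cached_prior_grad`, and `render_loss_fn` are hypothetical placeholders, and whether the prior gradient is added to or replaces the rendering gradient is exactly the point I'm asking about.

```python
import torch

# Hypothetical sketch of per-scene code optimization with a cached prior
# gradient. All names are placeholders, not the SSDNeRF API.
code = torch.zeros(3, 6, 128, 128, requires_grad=True)   # triplane code for one scene
code_optimizer = torch.optim.Adam([code], lr=1e-2)

# Gradient of the diffusion prior w.r.t. the code, refreshed only occasionally
# ("prior gradient caching"); a zero placeholder here.
cached_prior_grad = torch.zeros_like(code)

def render_loss_fn(code):
    # Stand-in for volume rendering against observed pixels.
    return (code ** 2).mean()

for step in range(10):
    code_optimizer.zero_grad()
    rend_loss = render_loss_fn(code)
    rend_loss.backward()               # writes the rendering gradient into code.grad
    code.grad.add_(cached_prior_grad)  # add the cached prior gradient (rather than overwrite code.grad)
    code_optimizer.step()              # only the code is updated; the decoder stays frozen in this loop
```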

If the condition at https://github.com/Lakonik/SSDNeRF/blob/3e50d1d9287d92ae40b5831fd6933ac64e125577/lib/models/autodecoders/base_nerf.py#L412 were True and there were an optimizer.step() for the decoder, that would convince me, but I don't think that's the case.

Am I misunderstanding something about prior gradient caching?

bring728 commented 1 year ago

I was mistaken. This function is indeed meant to train only the triplane code.