nianticlabs / diffusionerf

[CVPR 2023] DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models
MIT License

Questions about the DDM prior #1

Closed GuangyuWang99 closed 1 year ago

GuangyuWang99 commented 1 year ago

Hi Daniyar,

Thanks for your excellent work! Here I have some questions about the DDM prior.

As you mentioned in the main paper, the DDM is trained on RGBD patches from the Hypersim dataset to model a prior over the distribution of RGBD patches. During training, the gradients from the DDM are propagated to both the depth and the color of the rendered patches, with different weights.
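The weighted gradient propagation described above could be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code: the weight values, learning rate, and `apply_ddm_gradient` helper are all hypothetical, and in practice the score would flow back through the NeRF rendering rather than update the patch directly.

```python
import numpy as np

def apply_ddm_gradient(rgbd_patch, ddm_score, w_rgb=1.0, w_depth=0.1, lr=1e-2):
    """Illustrative sketch: nudge a rendered RGBD patch along the DDM score,
    weighting the colour channels and the depth channel differently.

    rgbd_patch, ddm_score: arrays of shape (4, H, W) -- RGB plus depth.
    w_rgb, w_depth, lr: hypothetical weights, not the paper's values.
    """
    weights = np.array([w_rgb, w_rgb, w_rgb, w_depth]).reshape(4, 1, 1)
    return rgbd_patch + lr * weights * ddm_score

# Toy usage: a zero patch and a unit score, so the update equals lr * weight.
patch = np.zeros((4, 8, 8))
score = np.ones((4, 8, 8))
updated = apply_ddm_gradient(patch, score)
```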

I wonder how you formulate the DDM. Does it simply take as input the concatenation of RGB and D along the channel dimension, or does it use a depth2img DDM so that the gradient propagates through the conditional branch? Could you share your code or supplementary material?

Thanks in advance!

jamiewynn commented 1 year ago

Hi there,

Thanks for your interest in our work. Exactly as you said, the DDM takes as input the concatenation of RGB and D images; as far as the diffusion model is concerned, the depth is 'just another channel' (although see my answer to this question for some details about how we preprocess the depth before feeding it to the model).
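In other words, the model input is just a 4-channel patch. A minimal sketch of that concatenation, assuming channel-first patches and leaving out the depth preprocessing mentioned above:

```python
import numpy as np

# Assumed shapes for illustration; the actual patch size is not stated here.
rgb = np.random.rand(3, 48, 48)    # 3-channel colour patch
depth = np.random.rand(1, 48, 48)  # 1-channel (preprocessed) depth patch

# Depth becomes 'just another channel' of the diffusion model's input.
rgbd = np.concatenate([rgb, depth], axis=0)
```

The DDM then denoises this 4-channel tensor exactly as it would a plain image, so no conditional branch is needed for gradients to reach the depth.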

We're intending to release our code some time in the next few months.

daniyar-niantic commented 1 year ago

Thank you for the interest in the paper!

The code and supplementary PDF are now available from the repo; please feel free to ask any clarifying questions in a new issue.