Closed · JiuTongBro closed 4 months ago
Sorry for my carelessness. For Q1, the normal is computed from the rendered depth rather than the GT inpainted depth 😀. Only the second question remains now.
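For reference, a minimal sketch of how normals can be derived from a rendered depth map, assuming a pinhole camera with known focal lengths; the function and variable names below are illustrative, not from this repository:

```python
import torch
import torch.nn.functional as F

def normals_from_depth(depth, fx, fy):
    """Estimate per-pixel normals from a rendered depth map (H, W) float tensor.

    Back-projects depth to camera-space points, then takes the cross
    product of the horizontal and vertical finite differences.
    fx, fy are the camera focal lengths in pixels (assumed known).
    """
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Back-project pixels to camera space (principal point assumed at the image center).
    x = (u - W / 2) * depth / fx
    y = (v - H / 2) * depth / fy
    points = torch.stack([x, y, depth], dim=-1)   # (H, W, 3)
    # Finite differences along the two image axes give tangent directions.
    dx = points[:, 1:, :] - points[:, :-1, :]     # (H, W-1, 3)
    dy = points[1:, :, :] - points[:-1, :, :]     # (H-1, W, 3)
    # Crop to a common shape; the cross product of the tangents is the normal.
    n = torch.cross(dx[:-1], dy[:, :-1], dim=-1)  # (H-1, W-1, 3)
    return F.normalize(n, dim=-1)
```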
Hi, rays_inp is filtered by masks. So, we only compute the depth loss within unmasked regions.
Thanks for your reply. But I can't find where it is filtered. Could you please point me to the corresponding line in run.py? Thanks.
Hi, I will check the code tomorrow and let you know.
Thanks.
Hi, I accidentally deleted some code while cleaning up the repo. Many thanks for your kind reminder! I have fixed this problem (see line 713).
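For anyone who pulled the repo before the fix, the filtering conceptually amounts to something like this minimal sketch (the variable and function names are assumed for illustration, not the repository's exact code):

```python
import torch

# Hypothetical shapes: rays_inp is (N, D) with ray origins/directions per pixel,
# depth_gt is (N,), and masks is (N,) with 1 inside the inpainted region, 0 outside.
def filter_unmasked(rays_inp, depth_gt, masks):
    """Keep only rays outside the masked region, so the depth
    reconstruction loss is supervised where ground truth is reliable."""
    keep = masks.reshape(-1) == 0  # unmasked pixels
    return rays_inp[keep], depth_gt[keep]

# The depth loss then runs on the filtered rays only, e.g.:
# rays, d_gt = filter_unmasked(rays_inp, depth_gt, masks)
# loss_depth = ((render_depth(rays) - d_gt) ** 2).mean()  # render_depth is hypothetical
```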
Thanks for your effort!
Hi. Thanks for your impressive work!
However, I have a few questions about how to run the code on my own data.
As mentioned in the paper, the surface normal for the SDS computation is no longer calculated from the density field, but from the inpainted depth. So it seems that, aside from the RGB images, 3D poses, 2D masks, and a text description, we also need to prepare the inpainted depth for every case, since it is an essential input required for the normal computation.
Therefore, I wonder: how do you get the inpainted depth? Especially for the 'generation of novel content' cases presented in the supplementary. Do you first obtain a new, coarse object through common SDS, render its depth as the 'inpainted depth', and then re-train a model using that depth?
Besides, the depth loss is computed over `rays_inp`, but `rays_inp` is not filtered by masks. Consequently, the depth reconstruction loss is computed on the whole image (both the masked and unmasked regions). I wonder, do we need to adjust the code to filter `rays_inp` so that it only contains the unmasked rays? Thanks!