nv-tlabs / GET3D


About the "background_feature" for texture image #148

Closed: ElmoShim closed this issue 6 months ago

ElmoShim commented 9 months ago

Hello, thank you for your great work!

I am trying to implement something based on your code, and I found a piece of code that I don't understand in the DMTETSynthesisNetwork.generate function:

def generate():
    ...
    background_feature = torch.zeros_like(tex_feat)

    # Merge them together
    img_feat = tex_feat * tex_hard_mask + background_feature * (1 - tex_hard_mask)
    ...

But isn't background_feature * (1 - tex_hard_mask) always zero, since background_feature is a zero tensor?

Is there any specific reason that img_feat is not simply img_feat = tex_feat * tex_hard_mask?
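
For reference, a minimal standalone check of the two forms (the tensor shapes here are made up for illustration; the real tensors come from the texture branch and the rendered hard mask), showing they are numerically identical while background_feature is all zeros:

import torch

# Made-up shapes purely for illustration
tex_feat = torch.randn(1, 32, 64, 64)
tex_hard_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()

background_feature = torch.zeros_like(tex_feat)

merged = tex_feat * tex_hard_mask + background_feature * (1 - tex_hard_mask)
masked_only = tex_feat * tex_hard_mask

print(torch.allclose(merged, masked_only))  # True: the zero background term contributes nothing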

ElmoShim commented 9 months ago

I removed the background_feature part and the model became harder to converge.

Does it somehow affect the gradients or something?
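
A minimal autograd sketch (again with made-up shapes) for checking whether the extra zero-background term changes the gradient with respect to tex_feat:

import torch

tex_feat = torch.randn(1, 32, 64, 64, requires_grad=True)
tex_hard_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
background_feature = torch.zeros_like(tex_feat)  # constant, does not require grad

# Form used in the repo
merged = tex_feat * tex_hard_mask + background_feature * (1 - tex_hard_mask)
grad_merged, = torch.autograd.grad(merged.sum(), tex_feat)

# Simplified form
tex_feat2 = tex_feat.detach().clone().requires_grad_(True)
masked_only = tex_feat2 * tex_hard_mask
grad_masked, = torch.autograd.grad(masked_only.sum(), tex_feat2)

# Both gradients reduce to the mask broadcast over the channel dimension
print(torch.allclose(grad_merged, grad_masked))  # True

(In this toy case the two gradients are equal, since the zero background is a constant.)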

Bathsheba commented 9 months ago

I'm not a dev, but I would guess the merge expression is defensive coding: this way, if background_feature were not 0, the merge would still work correctly, e.g. if the empty value of tex_feat were not 0, or if a noisy background were introduced. I agree it isn't needed here and now, but I personally wouldn't change it. It's not an expensive calculation, and if you were hacking up this code, someday it might save your bacon.

I don't know whether it would affect the gradients, or if your test was just unlucky.

SteveJunGao commented 6 months ago

Thank you @Bathsheba for explaining! Yes, I wrote the code this way just to avoid bugs, in case we train with a different background (e.g. a white background).
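
For illustration, a sketch of what the same merge would look like with a non-zero background, e.g. a constant "white" feature (the shapes and the value 1.0 are assumptions, not the repo's actual setup):

import torch

tex_feat = torch.randn(1, 32, 64, 64)
tex_hard_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()

# Hypothetical white background; with this, the simplified
# tex_feat * tex_hard_mask would no longer be equivalent.
background_feature = torch.ones_like(tex_feat)

img_feat = tex_feat * tex_hard_mask + background_feature * (1 - tex_hard_mask)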