taconite / arah-release

[ECCV 2022] ARAH: Animatable Volume Rendering of Articulated Human SDFs
https://neuralbodies.github.io/arah/
MIT License

Possible bug in ray_tracing.py #19

Closed lingtengqiu closed 1 year ago

lingtengqiu commented 1 year ago

https://github.com/taconite/arah-release/blob/ef407178b00c28382457419b12e53155a839dc10/im2mesh/metaavatar_render/renderer/ray_tracing.py#L249-L260

Hi, as the lines above illustrate, at L252 the variable `eval` is not defined in this function's scope, so the expression `~diverge_mask if eval else torch.ones_like(diverge_mask)` picks up the Python builtin `eval`, which is always truthy. The condition is therefore always satisfied, and the result is `~diverge_mask` in both the training and eval phases. I do not know whether this is a bug in the code, or whether you intended it to always be true.
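A minimal sketch of the behavior, with a hypothetical fix (the `convergence_mask` helper and `eval_mode` argument are my own names for illustration, not the repo's API):

    import torch

    diverge_mask = torch.tensor([True, False, True])

    # `eval` resolves to the Python builtin function, which is truthy,
    # so this conditional always takes the `~diverge_mask` branch
    mask = ~diverge_mask if eval else torch.ones_like(diverge_mask)
    assert torch.equal(mask, ~diverge_mask)  # holds in both train and eval

    # hypothetical fix: thread the flag in as an explicit argument
    def convergence_mask(diverge_mask, eval_mode):
        # during training, keep all rays so diverged ones can be recovered later
        return ~diverge_mask if eval_mode else torch.ones_like(diverge_mask)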

taconite commented 1 year ago

Thanks for the notice! Indeed, the `eval` input to `sphere_tracing` was accidentally removed from my pre-release version during code cleaning. I will push a fix for this along with other minor updates this week.

Overall, the reason for setting `diverge_mask` to ones during training is that in rare cases, some diverged rays (based on SMPL skinning) could be recovered by `search_iso_surface_depth`. Though at this point, it seems that these rare rays do not affect the final result in any significant way.
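For illustration, a self-contained sketch of that recovery idea (not the repo's `search_iso_surface_depth` API): bisection along the ray for the SDF zero-crossing, assuming a sign change exists between the near and far depths.

    import torch

    def bisect_iso_depth(sdf, origin, direction, t_near, t_far, iters=16):
        """Find depth t with sdf(origin + t * direction) ~= 0 by bisection."""
        for _ in range(iters):
            t_mid = 0.5 * (t_near + t_far)
            sign_mid = torch.sign(sdf(origin + t_mid * direction))
            sign_near = torch.sign(sdf(origin + t_near * direction))
            # keep the half-interval that still brackets the zero-crossing
            t_far = torch.where(sign_mid == sign_near, t_far, t_mid)
            t_near = torch.where(sign_mid == sign_near, t_mid, t_near)
        return 0.5 * (t_near + t_far)

    # toy SDF of a unit sphere at the origin
    sdf = lambda x: x.norm(dim=-1) - 1.0
    o = torch.tensor([0.0, 0.0, -3.0])
    d = torch.tensor([0.0, 0.0, 1.0])
    t = bisect_iso_depth(sdf, o, d, torch.tensor(0.0), torch.tensor(3.0))
    print(t)  # ~2.0, the first sphere intersection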

lingtengqiu commented 1 year ago

Can this code be used on an A100 / RTX 3090? I have seen the issue that Broyden's method cannot be used on the RTX 3090 (in IMAvatar and SNARF); I do not know whether your code works on these devices.

taconite commented 1 year ago

I did not test my code on these devices, but most likely it will have the same issue. Or maybe it was a driver-related issue, and updating to the latest driver could fix it.

lingtengqiu commented 1 year ago

Thanks for your detailed explanation. I am also confused about differentiable root-finding. I saw that SNARF/IMAvatar compute a correction term for implicit differentiation during training, as follows:

        # stop gradients through the root-finding iterations
        xc_opt = xc_opt.detach()

        # reshape to [B,?,D] for network query
        n_batch, n_point, n_init, n_dim = xc_init.shape
        xc_opt = xc_opt.reshape((n_batch, n_point * n_init, n_dim))

        xd_opt = self.forward_skinning(xc_opt, cond, tfs)

        # inverse Jacobian of forward skinning w.r.t. canonical points
        grad_inv = self.gradient(xc_opt, cond, tfs).inverse()

        # correction is zero in value but carries the gradient of xd_opt;
        # einsum here is torch.einsum
        correction = xd_opt - xd_opt.detach()
        correction = einsum("bnij,bnj->bni", -grad_inv.detach(), correction)

        # trick for implicit diff with autodiff:
        # xc = xc_opt + 0 and xc' = correction'
        xc = xc_opt + correction

However, I could not find the correction term in your code that makes root-finding differentiable. Do you make root-finding differentiable?

taconite commented 1 year ago

The correction term is added here
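For anyone reading along, here is a minimal 1-D sketch of the same detach trick (not ARAH's actual code): differentiate the root x* of f(x, a) = x^3 + x - a = 0 with respect to a without backpropagating through the solver iterations, via the implicit function theorem dx*/da = -(df/dx)^{-1} df/da.

    import torch

    def f(x, a):
        return x**3 + x - a

    a = torch.tensor(2.0, requires_grad=True)

    # root-finding with gradients disabled (stand-in for a Broyden/Newton solver)
    with torch.no_grad():
        x_root = torch.tensor(0.5)
        for _ in range(30):
            x_root = x_root - f(x_root, a) / (3 * x_root**2 + 1)

    x_opt = x_root.detach()
    fx = f(x_opt, a)                     # ~0 in value, but depends on a
    grad_inv = 1.0 / (3 * x_opt**2 + 1)  # (df/dx)^{-1} at the root, a constant

    correction = fx - fx.detach()        # zero value, carries d fx / da
    x = x_opt - grad_inv * correction    # x == x_opt, dx/da via implicit diff

    x.backward()
    print(a.grad)  # ~0.25 == 1 / (3 * x*^2 + 1) at x* = 1, the analytic value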

lingtengqiu commented 1 year ago

Thanks!!!