Hi all,

I'm using PyTorch3D to render the silhouette of a 3D human model so that I can optimize the model parameters (e.g. poses and global translation) with a loss between the predicted and the ground-truth silhouettes.
The issue is that the gradient of the loss w.r.t. the optimized parameters (`thetas` and `trans`) is always zero. I checked the computation graph: the gradient of the loss w.r.t. `silhouettes` is non-zero, but upstream of that every gradient is zero. I suspect the culprit is either the mapping from `mask_mesh` to `mask` or the one from `smpl_output.vertices` to `mask_mesh`, since the mapping from the parameters to `smpl_output` is already known to be differentiable.
Could someone suggest a solution to this problem? Thanks a lot in advance!
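To show how I localized the break: here is a minimal plain-PyTorch sketch (a toy stand-in graph, not my actual pipeline) using `retain_grad()` to inspect the gradient at each intermediate tensor:

```python
import torch

# Toy stand-in for params -> vertices -> mask -> loss
# (mirroring thetas/trans -> smpl_output.vertices -> mask_mesh -> silhouettes).
params = torch.randn(3, requires_grad=True)
vertices = params * 2.0
mask = torch.round(torch.sigmoid(vertices))  # a hard step like this zeroes gradients
loss = ((mask - 0.5) ** 2).sum()

# retain_grad() keeps gradients on intermediate (non-leaf) tensors for inspection.
vertices.retain_grad()
mask.retain_grad()
loss.backward()

print(mask.grad.abs().sum().item())      # non-zero: loss -> mask is fine
print(vertices.grad.abs().sum().item())  # 0.0: the break is between mask and vertices
print(params.grad.abs().sum().item())    # 0.0: the zero propagates down to the parameters
```

In my real graph the same pattern shows up: the gradient is alive at the silhouette but dead everywhere below it.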
Here is the code snippet (just the relevant part):
The loss is just the naive squared error:

```python
mask_loss = torch.sum((silhouettes - gt_silhouettes) ** 2)
```
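For what it's worth, the squared-error loss itself does propagate gradients when it is fed a soft silhouette; it only goes flat if the silhouette is binarized first. A small sketch illustrating this, with random tensors standing in for my `silhouettes` and `gt_silhouettes`:

```python
import torch

# Stand-ins: a soft silhouette (values in (0, 1)) and a binary ground truth.
silhouettes = torch.rand(8, 8, requires_grad=True)
gt_silhouettes = (torch.rand(8, 8) > 0.5).float()

# Binarizing before the loss makes it piecewise constant: the gradient is zero.
hard_loss = torch.sum((torch.round(silhouettes) - gt_silhouettes) ** 2)
hard_loss.backward()
hard_grad = silhouettes.grad.clone()
print(hard_grad.abs().sum().item())  # 0.0

# Using the soft values directly keeps the gradient alive.
silhouettes.grad = None
soft_loss = torch.sum((silhouettes - gt_silhouettes) ** 2)
soft_loss.backward()
print(silhouettes.grad.abs().sum().item())  # non-zero
```

So if the mapping from `mask_mesh` to `mask` contains any thresholding or rounding, that alone would explain the dead gradients.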