YuliangXiu / ICON

[CVPR'22] ICON: Implicit Clothed humans Obtained from Normals
https://icon.is.tue.mpg.de

Problem with the SMPL refining loss #74

Closed. SongYupei closed this issue 2 years ago.

SongYupei commented 2 years ago

Many thanks to the author for this work. I came across a question while reading the paper and the code. The Refining SMPL section of the paper explains that the SMPL estimate can be iteratively optimized during inference, with a loss made of two parts: the L1 difference between the unclothed (SMPL) normal map and the normal map predicted by the model, and the L1 difference between the mask of the SMPL normal map and the mask of the original image. However, I could not find the corresponding implementation in the code. What is the reason for this? Is the existing code implementation more efficient than the original one?

            # silhouette loss: compare the rendered SMPL mask against a mask
            # derived from the predicted clothed normal maps
            smpl_arr = torch.cat([T_mask_F, T_mask_B], dim=-1)[0]       # rendered SMPL masks (front | back)
            gt_arr = torch.cat(                                         # predicted clothed normal maps (front | back)
                [in_tensor['normal_F'][0], in_tensor['normal_B'][0]],
                dim=2).permute(1, 2, 0)
            gt_arr = ((gt_arr + 1.0) * 0.5).to(device)                  # map normals from [-1, 1] to [0, 1]
            bg_color = torch.Tensor([0.5, 0.5,
                                     0.5]).unsqueeze(0).unsqueeze(0).to(device)
            # foreground mask: pixels that differ from the gray background
            gt_arr = ((gt_arr - bg_color).sum(dim=-1) != 0.0).float()
            diff_S = torch.abs(smpl_arr - gt_arr)                       # per-pixel L1 between the two masks
            losses['silhouette']['value'] = diff_S.mean()
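
For comparison, the normal-difference term the paper describes would be an L1 loss between the SMPL-rendered normal maps and the predicted clothed normal maps. A minimal sketch, assuming T_normal_F / T_normal_B hold the rendered SMPL normals and normal_F / normal_B the predicted ones (these names follow the snippet above and are assumptions, not necessarily ICON's actual variables):

    import torch

    def smpl_normal_loss(T_normal_F, T_normal_B, normal_F, normal_B):
        # L1 difference between the SMPL-rendered normal maps and the
        # predicted clothed normal maps, front and back, averaged to a scalar
        diff_F = torch.abs(T_normal_F - normal_F)
        diff_B = torch.abs(T_normal_B - normal_B)
        return (diff_F + diff_B).mean()
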
YuliangXiu commented 2 years ago

The normal diff and the silhouette diff are both implemented, in L216-L234.

SongYupei commented 2 years ago

So, is this a revision error in the paper? Do you actually trust the predicted clothed normal map and use it directly to optimize the SMPL fit?

YuliangXiu commented 2 years ago

Yes, you can check the details in ICON's paper.

SongYupei commented 2 years ago

OK, I see. Would adding constraints from 2D pose keypoints be another good solution? Although it may increase inference time, pose constraints could better optimize SMPL's pose parameters.

SongYupei commented 2 years ago

Maybe you could use OpenPose or related work as an additional module.

YuliangXiu commented 2 years ago

Of course. Given 2D keypoints (OpenPose, MediaPipe, AlphaPose) or even semantic parsing results, the refinement process would certainly improve further. If you are interested in adding a keypoint constraint to ICON, pull requests are welcome.
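
For illustration, such a keypoint constraint is typically a confidence-weighted distance between the SMPL joints projected into the image and the detected 2D keypoints. A minimal sketch, not ICON's actual API; the projection of SMPL joints to pixel coordinates is assumed to happen elsewhere:

    import torch

    def keypoint_loss(smpl_joints_2d, keypoints_2d, conf, eps=1e-8):
        # smpl_joints_2d: (J, 2) SMPL joints projected to pixel coordinates
        # keypoints_2d:   (J, 2) detected 2D keypoints (e.g. from OpenPose)
        # conf:           (J,)   per-keypoint detection confidence in [0, 1]
        dist = ((smpl_joints_2d - keypoints_2d) ** 2).sum(dim=-1)
        return (conf * dist).sum() / (conf.sum() + eps)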

YuliangXiu commented 2 years ago

@SongYupei A new cloth-refinement module has been released. Use -loop_cloth 200 to refine ICON's reconstruction so that it matches the predicted clothing normal image.
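
For later readers, a possible invocation could look like the command below; only -loop_cloth 200 comes from the comment above, while the script path and the other flags are assumptions about ICON's demo setup and may differ in your checkout:

    # hypothetical command; only -loop_cloth 200 is taken from the comment above
    python -m apps.infer -cfg ./configs/icon-filter.yaml -gpu 0 \
        -in_dir ./examples -out_dir ./results -loop_cloth 200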