Closed SongYupei closed 2 years ago
The normal diff and silhouette diff are in L216-L234.
So, is this a writing error in the paper? Do you actually trust the predicted normal map of the clothed body, and directly use it to optimize the SMPL model?
Yes, you can find the details in ICON's paper.
OK, I see. Would another good solution be to add keypoint constraints? Although it may increase inference time, pose constraints could better optimize the SMPL pose parameters.
Maybe you can use OpenPose-related work as another module.
Of course, given 2D keypoints (OpenPose, MediaPipe, AlphaPose) or even semantic parsing results, the refinement would certainly improve further. If you are interested in adding a keypoint constraint to ICON, any pull requests are welcome.
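For anyone who wants to try this, a keypoint constraint usually means a confidence-weighted reprojection loss between the SMPL joints projected into the image and the 2D detections. A minimal numpy sketch (function and argument names are illustrative, not ICON's actual API; the projection step itself is assumed to happen upstream):

```python
import numpy as np

def keypoint_loss(projected_2d, detected_2d, confidence):
    """Confidence-weighted L1 reprojection loss.

    projected_2d : (J, 2) SMPL joints projected to the image plane
    detected_2d  : (J, 2) 2D detections (e.g. from OpenPose)
    confidence   : (J,)   per-joint detector confidence in [0, 1]
    """
    # Per-joint L1 residual, down-weighted by detector confidence so
    # occluded or uncertain joints do not dominate the optimization.
    residual = np.abs(projected_2d - detected_2d).sum(axis=-1)  # (J,)
    return float((confidence * residual).sum() / (confidence.sum() + 1e-8))
```

This term would simply be added to the existing normal/silhouette objective with a small weight; in a PyTorch implementation the same formula works on tensors and stays differentiable.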
@SongYupei A new cloth-refinement module is released. Use `-loop_cloth 200` to refine ICON's reconstruction, making it as good as the predicted clothing normal image.
Many thanks to the author for this work. I found a question while reading the paper and the code. The Refining SMPL section of the paper says the SMPL estimate is iteratively optimized during inference with a two-part loss: the L1 difference between the unclothed (body) normal map and the normal map predicted by the model, and the L1 difference between the mask of the SMPL normal map and the mask of the original image. However, I could not find the corresponding implementation in the code. What is the reason for this? Is the existing code implementation more efficient than the one described in the paper?
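For reference, the two-term objective described in the paper can be sketched as follows (a minimal numpy version with illustrative names and unit weights; the actual code operates on rendered normal maps, see L216-L234):

```python
import numpy as np

def smpl_refine_loss(body_normal, pred_normal, body_mask, image_mask,
                     w_normal=1.0, w_sil=1.0):
    """Sketch of the two-term SMPL refinement loss from the paper.

    body_normal : (H, W, 3) normal map rendered from the current SMPL body
    pred_normal : (H, W, 3) clothed normal map predicted by the network
    body_mask   : (H, W)    silhouette of the rendered SMPL body (0/1)
    image_mask  : (H, W)    foreground mask of the input image (0/1)
    """
    # Term 1: L1 normal difference, evaluated where the body is visible.
    inside = body_mask.astype(bool)
    normal_diff = np.abs(body_normal - pred_normal)[inside].mean()
    # Term 2: L1 silhouette difference between the two masks.
    silhouette_diff = np.abs(body_mask.astype(float)
                             - image_mask.astype(float)).mean()
    return w_normal * normal_diff + w_sil * silhouette_diff
```

Minimizing this over the SMPL pose/shape parameters (with a differentiable renderer producing `body_normal` and `body_mask`) is the iterative refinement the paper describes.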