YuliangXiu / ICON

[CVPR'22] ICON: Implicit Clothed humans Obtained from Normals
https://icon.is.tue.mpg.de

About the .obj (3D meshes) generated #25

Closed · guayabas closed this issue 2 years ago

guayabas commented 2 years ago

Is it possible to obtain the exact 3D mesh highlighted in yellow in the 1st attached image?

Browsing the code, I can see the 3 default meshes being saved in the obj folder for a given test run (xxx_smpl.obj, xxx_recon.obj, xxx_refine.obj). My assumption was that, since the cloth-norm (recon) tag in the image reflects xxx_recon.obj, cloth-norm (pred) should reflect xxx_refine.obj, but that does not seem to be the case. The 2nd screenshot shows the 3 generated meshes: xxx_recon and xxx_refine do differ, but not as noticeably as in the image. In the code the deform_verts variable is added to the default verts_ptr, but that displacement only produces some artifacts (or is this where I should tune some values to get better 3D mesh resolution?); the repo's test images show similar behavior.
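In case it helps others reading along, here is a minimal sketch of what I understand that step to do, assuming trimesh; deform_verts below is just random noise standing in for the real learned offsets, not ICON's actual tensor:

```python
import numpy as np
import trimesh

# Load the coarse reconstruction produced by marching cubes.
recon = trimesh.load("obj/xxx_recon.obj", process=False)

# Hypothetical per-vertex offsets, shape (V, 3); in ICON these come
# from the learned deform_verts variable, here just small noise.
deform_verts = 0.001 * np.random.randn(*recon.vertices.shape)

# The refined mesh is the coarse mesh with the offsets added to its
# vertices; the topology (faces) stays the same.
refine = trimesh.Trimesh(
    vertices=recon.vertices + deform_verts,
    faces=recon.faces,
    process=False,
)
refine.export("obj/xxx_refine.obj")
```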

I also thought that by increasing the marching-cubes resolution/values I should be able to obtain a 3D mesh that matches the cloth-norm (pred) image, but after following more of your code it seems the computation of the SDF function is also relevant (I'm not really an expert at generating those functions, and that is where I started to get lost).
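To make the resolution point concrete for myself, here is a toy sketch with scikit-image; the sphere SDF is just a stand-in for the implicit function ICON evaluates:

```python
import numpy as np
from skimage import measure

def sphere_sdf(res):
    """Signed distance to a unit sphere, sampled on a res^3 grid."""
    lin = np.linspace(-1.5, 1.5, res)
    x, y, z = np.meshgrid(lin, lin, lin, indexing="ij")
    return np.sqrt(x**2 + y**2 + z**2) - 1.0

# Doubling the grid resolution gives finer triangles, but the surface
# can only be as detailed as the SDF itself: if the implicit function
# is smooth, a denser grid adds smaller triangles, not new wrinkles.
for res in (64, 128, 256):
    sdf = sphere_sdf(res)
    verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
    print(res, verts.shape[0], faces.shape[0])
```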

I might be misinterpreting ICON, but my objective is just to know whether one can get a 3D mesh that matches the very detailed predicted normal map (or a very close approximation).

[Screenshot 1: 2022-03-06 194939]

[Screenshot 2: 2022-03-06 194834]

Thanks again for all the help so far and awesome work (:

YuliangXiu commented 2 years ago

Let me explain in detail:

For now, the predicted normal image always looks better than the final reconstructed mesh, and unfortunately refine.obj is NOT the mesh in the yellow box. This is a drawback of ICON, but I am still trying to push the quality of ICON's reconstruction as close to the predicted normal image as possible.

It should work as you expected; for example, PIFuHD [1] (another method using an implicit function) can recover very fine details as long as the predicted normal image is good. I am still debugging to see what's wrong with my implementation and will let you know about the latest progress.

[1] Saito, Shunsuke, et al. "PIFuHD: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
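For intuition, here is a rough sketch of the pixel-aligned query at the heart of PIFu-style methods, assuming PyTorch; the feature sizes, orthographic projection, and MLP are illustrative, not PIFuHD's actual architecture:

```python
import torch
import torch.nn.functional as F

def query_occupancy(feat_2d, points, mlp):
    """Pixel-aligned query: sample image features at each 3D point's
    projection, append depth, and let an MLP predict occupancy.

    feat_2d: (1, C, H, W) features from an image encoder that also
             sees the predicted normal map, so sharp normal details
             can influence the reconstructed surface.
    points:  (1, N, 3) query points in [-1, 1], camera-aligned.
    """
    # Orthographic projection: (x, y) indexes into the feature map.
    xy = points[..., :2].unsqueeze(2)              # (1, N, 1, 2)
    sampled = F.grid_sample(feat_2d, xy, align_corners=True)
    sampled = sampled.squeeze(-1).transpose(1, 2)  # (1, N, C)
    z = points[..., 2:]                            # (1, N, 1)
    return mlp(torch.cat([sampled, z], dim=-1))    # (1, N, 1)

# Toy usage with random features and a tiny MLP.
mlp = torch.nn.Sequential(
    torch.nn.Linear(33, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)
)
feat = torch.randn(1, 32, 128, 128)
pts = torch.rand(1, 1000, 3) * 2 - 1
occ = query_occupancy(feat, pts, mlp)
```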

guayabas commented 2 years ago

Great answer. Indeed, the SDF idea came from comparing the ICON result with PIFuHD. Looking forward to the next iteration of the ICON reconstruction (:

YuliangXiu commented 2 years ago

@guayabas The new cloth-refinement module is released. Use -loop_cloth 200 to refine ICON's reconstruction, making it as good as the predicted clothing normal image.

[Image: overlap]
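For the curious, the idea behind such a normal-guided refinement loop can be sketched as below, assuming PyTorch; render_normal is a hypothetical differentiable normal renderer (e.g. built on PyTorch3D) and the loss weights are illustrative, so this is only an outline of the idea, not ICON's actual implementation:

```python
import torch

def refine_cloth(verts, faces, pred_normal, render_normal, loops=200):
    """Optimize per-vertex offsets so the rendered normal map of the
    refined mesh matches the predicted clothing normal image."""
    deform_verts = torch.zeros_like(verts, requires_grad=True)
    optim = torch.optim.Adam([deform_verts], lr=1e-3)

    for _ in range(loops):  # e.g. -loop_cloth 200
        optim.zero_grad()
        rendered = render_normal(verts + deform_verts, faces)
        # The L1 normal loss pulls wrinkles from the predicted image
        # onto the mesh; the offset penalty keeps deformations small.
        loss = (
            (rendered - pred_normal).abs().mean()
            + 1e-2 * deform_verts.pow(2).mean()
        )
        loss.backward()
        optim.step()

    return verts + deform_verts.detach()
```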