YuliangXiu / ICON

[CVPR'22] ICON: Implicit Clothed humans Obtained from Normals
https://icon.is.tue.mpg.de

SCANimate question #17

Closed LiXiangDeng closed 2 years ago

LiXiangDeng commented 2 years ago

Hello, in the paper you mention that feeding this model into SCANimate yields a textured, animatable avatar. How is that done? Would you be willing to explain?

YuliangXiu commented 2 years ago
  1. Run reconstruction frame-by-frame on a video; this gives you the inner SMPL fits and the clothed mesh sequence.
  2. Keep only the vertices that are visible from the camera view (with textures projected from the monocular image).
  3. Now you have all the ingredients to train SCANimate. Its outputs are the learned skinning weights and pose-dependent implicit fields, giving you a fully textured, animatable avatar.

That's all.
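Step 2 above (keeping only camera-visible geometry) can be sketched in plain numpy, assuming you already have a per-vertex visibility mask (e.g. from a z-buffer or a ray test); `keep_visible_faces` is a hypothetical helper for illustration, not part of ICON:

```python
import numpy as np

def keep_visible_faces(vertices, faces, visible):
    """Keep only faces whose three vertices are all visible,
    then drop the now-unreferenced vertices and reindex the faces."""
    face_ok = visible[faces].all(axis=1)           # (F,) mask: all 3 corners visible
    kept_faces = faces[face_ok]
    used = np.unique(kept_faces)                   # vertices still referenced
    remap = -np.ones(len(vertices), dtype=np.int64)
    remap[used] = np.arange(len(used))
    return vertices[used], remap[kept_faces]

# Toy example: a quad made of two triangles; vertex 3 is "invisible",
# so the second triangle (and vertex 3) should be removed.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
v, f = keep_visible_faces(verts, faces, np.array([True, True, True, False]))
```

The same idea applies to the real reconstructed mesh: compute visibility per vertex, filter faces, and export the partial textured mesh for SCANimate.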

LiXiangDeng commented 2 years ago

Thanks a lot! I'd like to ask: if I use multi-view photos, how can I fuse them into a single model, i.e., input several images and obtain one textured model? Also, a small question about the Colab demo: the final video-generation step seems to fail with "FILE not found xxx.video". Sorry to bother you; I'm new to this field and want to first get a pipeline running that turns an input image into a textured human model.

YuliangXiu commented 2 years ago

ICON doesn't support multi-view inputs for now; maybe later. In theory, though, ICON could support multi-view in the same way as PIFu [1] and Transformer-PIFu [2], using a simple mean (PIFu [1]) or a more advanced transformer (Transformer-PIFu [2]) to fuse the implicit features from different views for more accurate surface regression.

As for the Colab, you need to wait a while in the "# run the test on examples" section for the final mp4 video to be generated.

[1] Saito, Shunsuke, et al. "PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization." Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2019.

[2] Zins, Pierre, et al. "Data-Driven 3D Reconstruction of Dressed Humans From Sparse Views." 2021 International Conference on 3D Vision (3DV). IEEE, 2021.
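The mean-fusion strategy mentioned above can be sketched as follows; `fuse_multiview_features` is a hypothetical helper (not part of ICON), shown only to illustrate how multi-view PIFu averages pixel-aligned features from different views before the surface regressor:

```python
import numpy as np

def fuse_multiview_features(per_view_feats):
    """per_view_feats: list of (N, C) pixel-aligned feature arrays,
    one per camera view, for the same N query points.
    Mean fusion (as in multi-view PIFu) simply averages over views;
    a transformer-based fuser would replace this mean with attention."""
    return np.mean(np.stack(per_view_feats, axis=0), axis=0)

# Two toy views with constant features 1 and 3; the fused features
# are their per-entry average.
feats = [np.ones((4, 8)), 3 * np.ones((4, 8))]
fused = fuse_multiview_features(feats)   # every entry is (1 + 3) / 2 = 2
```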

flymin commented 2 years ago
> 1. do reconstruction frame-by-frame on a video, then we have inside SMPL fits and clothed mesh sequences
> 2. only keep the vertices that are visible (with textures projected from monocular image) from the camera view
> 3. now you have all the ingredients to learn the SCANimate, learned skinning weights and pose-dependent implicit fields are SCANimate's outputs, now you have an animatable avatar with fully texture
>
> That's all.

Hi, I am also trying to combine ICON and SCANimate, and have some questions.

  1. Could you provide more detail about "only keep the vertices that are visible"? Does it mean I need to remove some vertices from the predicted mesh and modify SCANimate to accommodate this?
  2. I notice that the mesh reconstructed by ICON is quite rough. Do you have any suggestions for improving the results? [image]
  3. Using meshes like the one above, I tried to train SCANimate directly. However, the output for poses from the training set looks strange, as shown below; I suppose this is due to the reconstruction quality. Could you please give me some suggestions? [image]

Thank you in advance.

YuliangXiu commented 2 years ago
  1. Yes, you need to remove all the invisible faces (vertices without color from query_color) and feed these partial meshes into SCANimate.
  2. Use icon-filter, or smooth the mesh.
  3. Run SCANimate for more epochs (my settings: num_epoch_pt1: 150, num_epoch_pt2: 150, num_epoch_sdf: 4000), and use more scans with varied poses, keeping the same shape for each person.
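For point 2, "smooth the mesh" can mean ordinary Laplacian smoothing. Below is a minimal standalone numpy sketch of the idea; if you have trimesh available, `trimesh.smoothing.filter_laplacian` does the same job (plus Taubin/humphrey variants) on a `Trimesh` object:

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Uniform Laplacian smoothing: repeatedly move each vertex a fraction
    `lam` of the way toward the mean of its one-ring neighbors."""
    # Build neighbor sets from the triangle edges.
    neighbors = [set() for _ in range(len(vertices))]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        avg = np.array([v[list(n)].mean(axis=0) if n else v[i]
                        for i, n in enumerate(neighbors)])
        v += lam * (avg - v)
    return v

# Toy check: smoothing a tetrahedron pulls every vertex toward the centroid.
tet_v = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tet_f = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
smoothed = laplacian_smooth(tet_v, tet_f, iterations=5)
```

Note that heavy smoothing also erases cloth wrinkles, so for SCANimate inputs a few gentle iterations are usually preferable.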
flymin commented 2 years ago

Thank you! I will try it out.

xiegongsheldon commented 2 years ago

Do you know how to get the T-pose from this project? I'm seeking your guidance.

YuliangXiu commented 2 years ago

> do you know how to get T-pose from the project? seek for your guidance

SCANimate is all you need; you will find the related functions there to repose a posed scan to the T-pose.
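Conceptually, reposing a posed scan to the T-pose is inverse linear blend skinning: blend the per-joint bone transforms with the skinning weights, invert the blended transform, and apply it to each vertex. A minimal numpy sketch of that idea (this is not SCANimate's actual code; the function and argument names here are made up for illustration):

```python
import numpy as np

def unpose_lbs(v_posed, weights, joint_transforms):
    """Invert linear blend skinning: blend per-joint 4x4 transforms with
    the skinning weights, then apply the inverse to each posed vertex.
    v_posed: (N, 3); weights: (N, J), rows sum to 1; joint_transforms: (J, 4, 4)."""
    T = np.einsum("nj,jab->nab", weights, joint_transforms)   # per-vertex blended transform
    v_h = np.concatenate([v_posed, np.ones((len(v_posed), 1))], axis=1)
    v_rest = np.einsum("nab,nb->na", np.linalg.inv(T), v_h)   # batched inverse, then apply
    return v_rest[:, :3]

# Toy check: a single bone that translates by (1, 0, 0); unposing undoes it.
bone = np.eye(4)
bone[:3, 3] = [1.0, 0.0, 0.0]
rest = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
posed = rest + np.array([1.0, 0.0, 0.0])
recovered = unpose_lbs(posed, np.ones((2, 1)), bone[None])
```

SCANimate's contribution is precisely that it *learns* the `weights` field instead of copying SMPL's, which is why it can unpose clothed scans cleanly.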

xiegongsheldon commented 2 years ago

Thank you! Your email has been received.

xiegongsheldon commented 2 years ago

> repose posed scan to T-pose

Can you tell me which function in SCANimate does that work? SCANimate's training needs a T-pose SMPL. Thank you very much.

xiegongsheldon commented 2 years ago

> Thank you! I will try it out.

Hello, did you succeed in training SCANimate?

flymin commented 2 years ago

> Hello, did you succeed in training SCANimate?

I think SCANimate needs a minimal body, posed meshes, and the pose parameters. I use the SMPL model as the minimally-clothed body; only the beta parameters are needed. I added the following code in infer.py:

```python
import trimesh

smpl_out = dataset.smpl_model(betas=betas.cuda())
vert = smpl_out.vertices[0].cpu()
faces = dataset.smpl_model.faces
smpl_obj = trimesh.Trimesh(vert, faces, process=False, maintain_order=True)
smpl_obj.export(f"{args.out_dir}minimal.ply")
```

However, I am not sure whether this is the correct solution. The result is still unsatisfactory.

xiegongsheldon commented 2 years ago
> betas=betas.cuda()

Yes, thank you for your reply. My result is also unsatisfactory. Maybe the number of samples is not enough to train SCANimate?

Bill-WangJiLong commented 1 year ago

May I ask how you obtained the npz file? I noticed that ICON exports an npy file; are the two the same?



Bill-WangJiLong commented 1 year ago

> Thank you! I will try it out.

Hello, how do you convert the SMPL parameters to the format required by SCANimate? I noticed that not only the file format but also the array size differs: SCANimate seems to need 23×3×3, but ICON saves 23×6. Does this need to be changed? Thank you.

YuliangXiu commented 1 year ago

> It seems that SCANimate needs 23×3×3, but ICON saves 23×6. Do you need to change this aspect?

Please check rotation_converter.py to convert the angles as you need.
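For context, the 23×6 numbers are the continuous 6D rotation representation (Zhou et al., CVPR 2019) used by ICON's pose estimator, while SCANimate's 23×3×3 are plain rotation matrices. Below is a numpy sketch of the standard 6D-to-matrix conversion; the exact helper names inside rotation_converter.py may differ, so treat this only as a reference for what the conversion does:

```python
import numpy as np

def rot6d_to_matrix(d6):
    """Convert (N, 6) 6D rotations (the first two columns of a rotation
    matrix, Zhou et al. CVPR 2019) into (N, 3, 3) matrices via Gram-Schmidt."""
    a1, a2 = d6[:, :3], d6[:, 3:]
    b1 = a1 / np.linalg.norm(a1, axis=1, keepdims=True)     # normalize first axis
    a2 = a2 - (b1 * a2).sum(axis=1, keepdims=True) * b1     # remove b1 component
    b2 = a2 / np.linalg.norm(a2, axis=1, keepdims=True)
    b3 = np.cross(b1, b2)                                   # third axis completes the frame
    return np.stack([b1, b2, b3], axis=-1)                  # columns are b1, b2, b3

# ICON's 23x6 body-pose block becomes SCANimate's 23x3x3 like so:
pose_6d = np.tile([1.0, 0, 0, 0, 1.0, 0], (23, 1))          # 23 identity rotations in 6D form
pose_mats = rot6d_to_matrix(pose_6d)                        # shape (23, 3, 3)
```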

Bill-WangJiLong commented 1 year ago

> Please check rotation_converter.py to convert the angles as you need.

I tried it as in this screenshot: [image]

However, the SMPL parameters are wrong. I used rendered pictures from cape3view and ran infer.py; there is a relatively large gap between the estimated SMPL parameters and the ground-truth parameters of the CAPE dataset. [image] [image]

Is it a problem with my code, or is there something else I haven't modified? Or is this just the normal error of estimating SMPL parameters with HPS? Please give me a hint, thank you very much!