ShirleyMaxx / VirtualMarker

[CVPR 2023] Official PyTorch implementation of "3D Human Mesh Estimation from Virtual Markers"
Apache License 2.0
257 stars 27 forks

Can I get the SMPL parameters of the result? #13

Closed Thunderltx closed 1 week ago

Thunderltx commented 1 year ago

Thanks for your great work!

I ran the demo and got the result video with the mesh, so I have one question: can I get the SMPL joint parameters of the result?

ShirleyMaxx commented 1 year ago

Theoretically yes, and there are several ways.

  1. The SMPL parameters can be optimized by minimizing the distance between the mesh produced by the SMPL parameters being optimized and the predicted mesh.
  2. Or a simple MLP can be trained to learn the mapping from mesh vertices to SMPL parameters (a sketch of this follows below).

We tried both methods earlier, and the error of the mesh corresponding to the fitted SMPL parameters is similar to the error of the original estimate.
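To make the second option concrete, here is a minimal, hypothetical sketch (not the authors' code): a small MLP that regresses the 72 pose and 10 shape parameters from flattened mesh vertices, assuming you have paired training data of meshes and ground-truth SMPL parameters. The Mesh2SMPL name and the dummy batch are placeholders.

import torch
import torch.nn as nn

# hypothetical regressor: flattened mesh vertices -> SMPL pose + shape
class Mesh2SMPL(nn.Module):
    def __init__(self, num_verts=6890, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_verts * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 72 + 10))   # 72 axis-angle pose + 10 shape

    def forward(self, verts):             # verts: (B, 6890, 3)
        out = self.net(verts.flatten(1))
        return out[:, :72], out[:, 72:]   # (B, 72), (B, 10)

model = Mesh2SMPL()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# dummy batch standing in for a real dataset of (mesh, gt SMPL params) pairs
mesh_batch = torch.randn(8, 6890, 3)
gt_pose, gt_shape = torch.randn(8, 72), torch.randn(8, 10)

for _ in range(100):                      # training loop sketch
    pred_pose, pred_shape = model(mesh_batch)
    loss = nn.MSELoss()(pred_pose, gt_pose) + nn.MSELoss()(pred_shape, gt_shape)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

One caveat: supervising axis-angle pose directly with MSE can plateau because the representation is discontinuous; regressing a continuous rotation representation (e.g. 6D) and converting back to axis-angle often fits better.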

John-Yao commented 1 year ago

@ShirleyMaxx Hi, could you provide more details about the first method, such as the optimizer, the loss, and how long the fitting takes?

jack121388 commented 1 year ago

Thanks for your great work!

I ran the demo and got the result video with the mesh, so I have one question: can I get the SMPL joint parameters of the result?

Hi, may I ask: I used the default input directly, then ran sh command/simple3dmesh_infer/baseline.sh. When I checked the output directory, I found that the model just re-exported the original video without the demo effect. Is there a step I missed?

Luke-Luo1 commented 1 year ago

Thanks for your great work! I ran the demo and got the result video with the mesh, so I have one question: can I get the SMPL joint parameters of the result?

Hi, may I ask: I used the default input directly, then ran sh command/simple3dmesh_infer/baseline.sh. When I checked the output directory, I found that the model just re-exported the original video without the demo effect. Is there a step I missed?

It seems like you have a problem in the step of installing the rendering environment.

Luke-Luo1 commented 1 year ago

Theoretically yes, and there are several ways.

1. The SMPL parameters can be optimized by minimizing the distance between the mesh produced by the SMPL parameters being optimized and the predicted mesh.

2. Or a simple MLP can be trained to learn the mapping from mesh vertices to SMPL parameters.

We tried both methods earlier, and the error of the mesh corresponding to the fitted SMPL parameters is similar to the error of the original estimate.

Firstly, congratulations on your excellent work. I have a question: I tried the MLP method on the example video, but the best result I can reach is still an RMSE of around 9. So I am also curious how the MLP method achieved an appreciable result on your side.

ShirleyMaxx commented 1 year ago

@ShirleyMaxx Hi, could you provide more details about the first method, such as the optimizer, the loss, and how long the fitting takes?

Hi @John-Yao, sorry that I cannot provide the whole fitting code, but the core code is below; I hope it helps.

import os.path as osp
import pickle

import torch
import torch.nn as nn

from virtualmarker.core.config import cfg, update_config, init_experiment_dir
from virtualmarker.utils.smpl_utils import SMPL

# inputs from the network (CUDA float tensors):
#   pred_pose3d: predicted 3D pose (batch_size, J, 3), with J matching the joint set of J_regressor
#   pred_mesh:   predicted mesh vertices (batch_size, V=6890, 3)
batch_size = pred_mesh.shape[0]

# initialize SMPL layer
smpl_layer = SMPL(
    osp.join(cfg.data_dir, 'smpl'),
    batch_size=1,
    create_transl=False,
    gender='neutral').cuda()
J_regressor = torch.Tensor(smpl_layer.J_regressor_h36m).cuda()

# initialize SMPL parameters; start from the T-pose for faster convergence
with open('t_pose.pkl', 'rb') as f:
    init_t_pose = pickle.load(f)    # (72,) initial pose parameters (T-pose)
init_t_pose = torch.as_tensor(init_t_pose, dtype=torch.float32)
# repeat rather than expand so every sample in the batch owns its own memory
pose_params = init_t_pose.unsqueeze(0).repeat(batch_size, 1).cuda()   # (batch_size, 72)
shape_params = torch.zeros(batch_size, 10).cuda()                     # (batch_size, 10)
pose_params.requires_grad = shape_params.requires_grad = True

# set up optimizer
optimizer = torch.optim.Adam([pose_params, shape_params], lr=1e-2)

# start fitting
max_iters = 10000   # fitting iterations
for _ in range(max_iters):
    fitted_mesh = smpl_layer(pose_params, shape_params)[0]   # (batch_size, 6890, 3)
    fitted_pose = torch.matmul(J_regressor, fitted_mesh)     # (batch_size, J, 3)

    joint3d_loss = nn.MSELoss()(fitted_pose, pred_pose3d)
    mesh3d_loss = nn.MSELoss()(fitted_mesh, pred_mesh)
    loss = cfg.loss.loss_weight_joint3d * joint3d_loss + cfg.loss.loss_weight_mesh3d * mesh3d_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# the final fitted SMPL parameters
fitted_pose_params = pose_params.detach()
fitted_shape_params = shape_params.detach()
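A note on the snippet above: the 72-dimensional pose vector is SMPL's standard axis-angle parameterization (24 joints × 3), and the T-pose initialization only serves to speed up convergence; the loss weights are read from the project config. If fitting stalls, lowering the learning rate after the first few thousand iterations is a common trick.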
ShirleyMaxx commented 1 year ago

Thanks for your great work! I ran the demo and got the result video with the mesh, so I have one question: can I get the SMPL joint parameters of the result?

Hi, may I ask: I used the default input directly, then ran sh command/simple3dmesh_infer/baseline.sh. When I checked the output directory, I found that the model just re-exported the original video without the demo effect. Is there a step I missed?

Could you please check whether the rendering environment using the pyrender package is successfully installed? You can follow the steps in the Quick demo.
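As a quick way to verify that step, here is a small, hypothetical sanity check (not part of the repo): it renders a dummy sphere offscreen with pyrender, and if it runs without raising, the rendering environment is probably installed correctly. Depending on the machine, you may need to set PYOPENGL_PLATFORM=osmesa (or =egl) in the environment first.

import numpy as np
import trimesh
import pyrender

scene = pyrender.Scene()
# a unit sphere as a stand-in for the SMPL mesh
scene.add(pyrender.Mesh.from_trimesh(trimesh.creation.uv_sphere(radius=0.5)))

cam_pose = np.eye(4)
cam_pose[2, 3] = 2.0   # move the camera back so the sphere is in view
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=cam_pose)
scene.add(pyrender.DirectionalLight(intensity=3.0), pose=cam_pose)

renderer = pyrender.OffscreenRenderer(viewport_width=640, viewport_height=480)
color, depth = renderer.render(scene)   # raises if the GL backend is broken
renderer.delete()
print('pyrender OK:', color.shape, depth.shape)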

ShirleyMaxx commented 1 year ago

Theoretically yes, and there are several ways.

1. The SMPL parameters can be optimized by minimizing the distance between the mesh produced by the SMPL parameters being optimized and the predicted mesh.

2. Or a simple MLP can be trained to learn the mapping from mesh vertices to SMPL parameters.

We tried both methods earlier, and the error of the mesh corresponding to the fitted SMPL parameters is similar to the error of the original estimate.

Firstly, congratulations on your excellent work. I have a question: I tried the MLP method on the example video, but the best result I can reach is still an RMSE of around 9. So I am also curious how the MLP method achieved an appreciable result on your side.

Thank you for your kind words.

Maybe you could try initializing the pose parameters with the T-pose, as in the code snippet above.