ZhanxyR / SHERT

[CVPR'24 Oral] Official Pytorch implementation for Semantic Human Mesh Reconstruction with Textures.
https://zhanxy.xyz/projects/shert/
MIT License

How to get the uv information on my own example #7

Open liaochenchieh opened 2 weeks ago

liaochenchieh commented 2 weeks ago

Hi, thanks a lot for sharing this wonderful project! I have tried out my single-image example with estimated SMPL obj files from ECON. Now, when I want to use a visualization tool (I am using Blender) to see the generated results, I find it hard to apply the partial_tex texture image to the smplx_d2/smplx_star object, mainly because the object does not contain the UV information. I am pretty new to this area and don't know how to handle the UV part well. Could you provide some tips for solving this problem? Thank you very much!

ZhanxyR commented 2 weeks ago

Hi, you can load the obj file and rewrite it with save_mtl so that it contains the UV information. Alternatively, you can use any other method to replace the vts and faces of the .obj file. The UV information can also be found in partial_colored.obj.
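
If it helps, here is a minimal sketch of the second option (copying the UV records from partial_colored.obj into the untextured mesh). The function name and file paths are placeholders, and it assumes both meshes share the same vertex order and topology:

  # Copy `vt` and `f` records from a UV-bearing OBJ into one without UVs.
  # Assumes identical vertex order/topology; all paths are placeholders.
  def copy_uvs(src_with_uv, dst_no_uv, out_path):
      with open(src_with_uv) as f:
          uv_lines = [l for l in f if l.startswith(('vt ', 'f '))]
      with open(dst_no_uv) as f:
          vert_lines = [l for l in f if l.startswith('v ')]
      with open(out_path, 'w') as f:
          f.write('mtllib material.mtl\nusemtl material\n')
          f.writelines(vert_lines)
          f.writelines(uv_lines)

  copy_uvs('partial_colored.obj', 'smplx_d2.obj', 'smplx_d2_uv.obj')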

liaochenchieh commented 2 weeks ago

@ZhanxyR Hi, I just tried out the save_mtl function, and it is exactly what I need for the UV mapping! Thanks a lot again for the solution!

ZhanxyR commented 2 weeks ago

You're welcome, and I'll close this issue. If you have any other questions, feel free to ask.

liaochenchieh commented 2 weeks ago

@ZhanxyR Hi, thanks for the previous solution. I am currently having an issue with exporting an FBX / rigging the SMPL model using the estimated results we have. I previously tried the official SMPL-X Blender add-on, but it seems it cannot import the model from my parameters. I know this might not be the focus of this project, but would you happen to have any insight into how to generate a rigged model (for example, an FBX that can be animated in applications like VR) from the parameters estimated in this project?

ZhanxyR commented 2 weeks ago

Unfortunately, I am not familiar with the FBX file structure. :( But we use this function to subdivide the original SMPL-X model and remove the eyeballs, which changes the mesh from (10475 vertices, 20908 faces) to (149921 vertices, 299712 faces). We also regenerate the skinning weights, which are saved at data/skinning_weights/lbs_weights_divide2.npy. I will reopen this issue, and I hope this helps you.
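
For reference, here is a minimal sketch of how such weights are typically consumed in standard linear blend skinning. This is generic LBS, not code from this repo, and the per-joint transforms are placeholders:

  import numpy as np

  # Generic linear-blend-skinning sketch.
  # verts: (V, 3) rest-pose vertices; weights: (V, J) skinning weights,
  #   e.g. np.load('data/skinning_weights/lbs_weights_divide2.npy');
  # transforms: (J, 4, 4) per-joint rigid transforms (placeholders here).
  def lbs(verts, weights, transforms):
      homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (V, 4)
      blended = np.einsum('vj,jab->vab', weights, transforms)           # (V, 4, 4)
      posed = np.einsum('vab,vb->va', blended, homo)                    # (V, 4)
      return posed[:, :3]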

liaochenchieh commented 1 week ago

@ZhanxyR Hi, thanks for reopening the issue. Since rigging a new object is relatively hard, I am now considering using the original SMPL or SMPL-X model (which is already rigged and easy to control in real time for my application). My goal is therefore to generate a color texture map that fits the SMPL (or SMPL-X) topology.

I think some parts of your pipeline may help. Would you have any advice on this, or know of any existing method I could look into? I'm looking forward to talking with you more. Thank you!

liaochenchieh commented 1 week ago

Would it be possible to run your pipeline without subdividing the SMPL-X model? (Though it seems that many parts of the code would need to be changed.)

ZhanxyR commented 1 week ago

Apologies for the delay in my response. If you just hope to obtain an SMPL-X-based result without subdivision, there is a very simple way: as shown in Fig. S15 of our SupMat., you can extract only the first 9383 vertices and the first 18732 triangle faces from the subdivided mesh file to get an SMPL-X mesh (but without the eyeballs, which would need to be added back manually or adaptively).
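
A direct transcription of that extraction, e.g. with trimesh (my sketch, not part of the repo; paths are placeholders, and it drops the UVs, which the script in my next comment restores from a reference .npz):

  import trimesh

  # Placeholder path for your subdivided result.
  mesh = trimesh.load('results/0455_smooth_smplx.obj', process=False)
  sub = trimesh.Trimesh(vertices=mesh.vertices[:9383],
                        faces=mesh.faces[:18732], process=False)
  sub.export('smplx_no_eyeballs.obj')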

ZhanxyR commented 1 week ago
  import numpy as np

  def load_obj(file):
      # Parse vertices (v), UV coordinates (vt), and faces (f) from an OBJ file.
      verts = []
      vts = []
      faces = []
      with open(file) as f:
          for line in f:
              strs = line.split(" ")
              if strs[0] == "v":
                  verts.append((float(strs[1]), float(strs[2]), float(strs[3])))
              elif strs[0] == "vt":
                  vts.append((float(strs[1]), float(strs[2])))
              elif strs[0] == "f":
                  # Each face corner is stored as a [vertex_index, vt_index] pair.
                  faces.append([[int(s) for s in strs[1].split("/")],
                                [int(s) for s in strs[2].split("/")],
                                [int(s) for s in strs[3].split("/")]])
      return np.array(verts, dtype=float), np.array(vts, dtype=float), np.array(faces, dtype=int)

  def save_obj(verts, faces, path_out, single=False, vts=None, colors=None):
      # Write an OBJ file, optionally with per-vertex colors and UV coordinates.
      with open(path_out, 'w') as fp:

          fp.write('mtllib material.mtl\nusemtl material\n')

          if colors is not None:
              # Append per-vertex RGB colors to each vertex line.
              for i in range(len(verts)):
                  vi_np = np.array(verts[i])
                  color_np = np.array(colors[i])
                  fp.write('v %f %f %f %f %f %f\n' % (vi_np[0], vi_np[1], vi_np[2],
                                                      color_np[0], color_np[1], color_np[2]))
          else:
              for vi in verts:
                  vi_np = np.array(vi)
                  fp.write('v %f %f %f\n' % (vi_np[0], vi_np[1], vi_np[2]))

          if vts is not None:
              for vt in vts:
                  vt_np = np.array(vt)
                  fp.write('vt %f %f\n' % (vt_np[0], vt_np[1]))

          for fi in faces:
              ft = np.array(fi)
              if not single:
                  # Each corner references both a vertex and a UV index (v/vt).
                  fp.write('f %d/%d %d/%d %d/%d\n' % (ft[0][0], ft[0][1], ft[1][0],
                                                      ft[1][1], ft[2][0], ft[2][1]))
              else:
                  # single=True writes vertex indices only.
                  if len(ft.shape) == 2:
                      ft = ft[..., 0]
                  fp.write('f %d %d %d\n' % (ft[0], ft[1], ft[2]))

  def downsample(mesh_path, level=1):
      # Truncate a subdivided mesh back to the lower-level topologies stored
      # in the .npz files. Note: `level` is currently unused; both levels
      # are always written.

      verts, _, faces = load_obj(mesh_path)

      # Level 1: keep the first verts_num vertices and reuse the stored
      # faces/UVs of that topology.
      info = np.load('downsample_level1.npz', allow_pickle=True)
      verts = verts[:info['verts_num']]
      save_obj(verts, info['faces'], 'test_1.obj', vts=info['vts'])

      # Level 2: truncate further in the same way.
      info = np.load('downsample_level2.npz', allow_pickle=True)
      verts = verts[:info['verts_num']]
      save_obj(verts, info['faces'], 'test_2.obj', vts=info['vts'])

  if __name__ == '__main__':

      # The commented lines below show how the two .npz files were generated
      # from the reference downsampled meshes.
      # down_level_1_path = 'old/down_1.obj'
      # verts, vts, faces = load_obj(down_level_1_path)
      # np.savez('downsample_level1.npz', verts_num=len(verts), vts=vts, faces=faces)
      # print(vts.shape, faces.shape)  # (56196, 2) (18732, 3, 2)

      # down_level_2_path = 'old/down_0.obj'
      # verts, vts, faces = load_obj(down_level_2_path)
      # np.savez('downsample_level2.npz', verts_num=len(verts), vts=vts, faces=faces)

      mesh_path = 'old/0455_smooth_smplx.obj'
      downsample(mesh_path)

@liaochenchieh This is a simple script I have tested. The required .npz files can be downloaded here. You can check this function to see how we handle the eyeballs.

And one step we did not show in this repo is using Real-ESRGAN to super-resolve the projected texture, which brings a great improvement, especially on the human face.
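
For reference, running the stock Real-ESRGAN inference script on the projected texture would look roughly like this (model name, paths, and scale are illustrative; check the Real-ESRGAN README for the exact options):

  python inference_realesrgan.py -n RealESRGAN_x4plus \
      -i results/partial_tex.png -o results/sr --outscale 4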

[Image: left (after), right (before)]

liaochenchieh commented 1 week ago

@ZhanxyR Thanks for the reply! Does this mean I should create an SMPL-X-based texture map by using your downsampling method, replacing the mesh_path in the texture projection function with the downsampled mesh?

ZhanxyR commented 1 week ago

I think you should do the replacement after texture projection. That is, after you get a subdivided textured mesh, you can directly downsample the mesh file but keep the texture image unchanged.
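
In other words, assuming the script above is saved as downsample.py (the file name and paths here are placeholders):

  # 1. Run the pipeline to get the subdivided, textured mesh.
  # 2. Downsample the geometry only; the UVs stored in the .npz files map
  #    into the same texture image, so the texture stays unchanged.
  from downsample import downsample

  downsample('results/0455_textured_subdivided.obj')
  # test_1.obj / test_2.obj can now be rendered with the original texture.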
