tencent-ailab / hifi3dface

Code and data for our paper "High-Fidelity 3D Digital Human Creation from RGB-D Selfies".

What are the original names of the lmk2D_68_model and lmk3D_86_model? #8

Closed miaoYuanyuan closed 3 years ago

miaoYuanyuan commented 3 years ago

Hello!

In the paper, you say that the real-time facial landmark detector for 2D landmarks is a MobileNet model trained on the 300W-LP dataset. But which method is referenced for the 3D landmarks?

cyj907 commented 3 years ago

Do you mean the method/loss used to train the 3D landmark model? You can find a reference at https://github.com/1adrianb/face-alignment

miaoYuanyuan commented 3 years ago

Yes, it's FAN. But it only has 68 3D points, while your code supports 86 points. Did you train it yourself?

cyj907 commented 3 years ago

Yes, we trained it using our 86-point annotations. But other 3D landmark detection models can also work, once you find the corresponding indices in the 3D mesh.
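
For illustration, the "find the corresponding indices" step could be a nearest-neighbor lookup between detected landmarks and mesh vertices. A minimal sketch (not this repo's API; the function and variable names below are made up), assuming the detector output and the fitted mesh live in the same coordinate frame:

```python
# Map detected 3D landmark positions to the nearest vertex indices of a
# mesh, so an off-the-shelf detector (e.g. 68-point FAN) can stand in for
# the 86-point model once the indices are known.
import numpy as np
from scipy.spatial import cKDTree

def landmark_vertex_indices(mesh_vertices, landmark_points):
    # mesh_vertices: (V, 3); landmark_points: (K, 3), same coordinate frame.
    # Returns (K,) index of the nearest mesh vertex for each landmark.
    tree = cKDTree(mesh_vertices)
    _, idx = tree.query(landmark_points)
    return idx

# Toy usage: landmarks taken (almost) exactly from mesh vertices.
rng = np.random.default_rng(0)
verts = rng.random((500, 3))
lmks = verts[[10, 42, 99]] + 1e-4
print(landmark_vertex_indices(verts, lmks))
```

In practice the indices are computed once on the template mesh and then reused, since the mesh topology is fixed across identities.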

miaoYuanyuan commented 3 years ago

Thank you very much! I have another question: when I use the code in "texture/step0_unwrap.py" on the BFM model, I can't get a proper UV texture. Can you give me some suggestions? Here is what I modified:

  1. used load_3dmm_basis_bfm() instead of load_3dmm_basis()
  2. assigned basis3dmm['tri_vt'] the value of basis3dmm['tri']
  3. loaded BFM_UV.mat as basis3dmm['vt_list']
cyj907 commented 3 years ago

Hello, we do not provide a UV definition for the BFM model, so the unwrap code does not work for BFM. If you do want a UV for BFM, you would have to find the correspondence between our provided topology and the BFM topology; then you can transfer the UV definition to BFM.
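
Transferring a UV definition across topologies could look like the following sketch, assuming the two head meshes are already rigidly aligned and using a simple nearest-vertex correspondence (the names below are hypothetical, not functions from this repo):

```python
# Copy per-vertex UV coordinates from a source topology to a destination
# topology by matching each destination vertex to its nearest source vertex.
# Note: this repo stores UVs per UV-vertex (vt_list / tri_vt), so a real
# transfer would also have to rebuild tri_vt; this sketch covers only the
# per-vertex correspondence step.
import numpy as np
from scipy.spatial import cKDTree

def transfer_uv(src_vertices, src_uv, dst_vertices):
    # src_vertices: (Vs, 3), src_uv: (Vs, 2), dst_vertices: (Vd, 3).
    # Returns (Vd, 2) UV coordinates for the destination mesh.
    tree = cKDTree(src_vertices)
    _, nearest = tree.query(dst_vertices)
    return src_uv[nearest]
```

In practice a non-rigid registration (e.g. NICP) between the two topologies gives a much cleaner correspondence than a raw nearest-neighbor lookup, especially around the eyes and mouth.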

miaoYuanyuan commented 3 years ago

Do you mean this BFM UV? https://github.com/anilbas/3DMMasSTN/blob/master/util/BFM_UV.mat

cyj907 commented 3 years ago

Something similar. But that UV definition might not be consistent with ours (you have to unwrap the UV and check whether the facial features land in the same locations). So the high-quality texture synthesis might not work with that UV definition.
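
One cheap way to run that check is to compare where a few known facial features (eye corners, nose tip, mouth corners) land in each UV layout. A sketch, with all indices and arrays as hypothetical placeholders:

```python
# Compare the UV positions of matching feature vertices under two UV
# layouts. Large offsets mean a texture synthesized for one layout will
# not line up with the other.
import numpy as np

def uv_feature_offsets(uv_a, idx_a, uv_b, idx_b):
    # uv_a: (Va, 2) per-vertex UVs of layout A, idx_a: feature vertex
    # indices in A; likewise for layout B (the index lists differ because
    # the two topologies differ). Returns per-feature UV distances.
    a = uv_a[np.asarray(idx_a)]
    b = uv_b[np.asarray(idx_b)]
    return np.linalg.norm(a - b, axis=1)
```

Offsets above a few percent of the texture size would indicate the layouts are not interchangeable.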

miaoYuanyuan commented 3 years ago

Thank you very much! I see.