yfeng95 / PRNet

Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network (ECCV 2018)
http://openaccess.thecvf.com/content_ECCV_2018/papers/Yao_Feng_Joint_3D_Face_ECCV_2018_paper.pdf
MIT License
4.95k stars 947 forks

How to prepare the position map? #5

Closed ShownX closed 6 years ago

ShownX commented 6 years ago

Hello, Yao,

Very impressive work. Can you explain a little bit about generating the ground-truth position map?

yfeng95 commented 6 years ago

Hi Shown, sorry for the late reply. I just explained the process in #4.

You only need two steps to generate a position map:

  1. Generate the vertices from the 300W_LP dataset, then modify them a little (as described in Section 3.1: make sure the x, y coordinates of the generated vertices correspond to the 2D face, and shift z so that its minimum value is 0).
  2. Render the generated vertices with their UV coordinates (re-sampling). Here the generated vertices are used as a texture (replace r, g, b with x, y, z); see the sketch after this list.
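
In case a concrete picture helps, here is a minimal numpy sketch of step 2 (the UV-space re-sampling). It assumes you already have the image-aligned vertices from step 1, per-vertex UV coordinates, and the mesh triangles; all names are illustrative, and the loop-based rasterizer is written for clarity rather than speed (in practice you would use a compiled renderer).

```python
import numpy as np

def render_position_map(vertices, uv_coords, triangles, uv_size=256):
    """Rasterize per-vertex 3D coordinates into UV space (step 2 above).

    vertices  : (N, 3) image-aligned x, y, z (z shifted so min(z) == 0)
    uv_coords : (N, 2) per-vertex UV coordinates in [0, 1]
    triangles : (M, 3) vertex indices of the mesh triangles
    Returns a (uv_size, uv_size, 3) position map whose pixels store x, y, z.
    """
    pos_map = np.zeros((uv_size, uv_size, 3), dtype=np.float32)
    uv = uv_coords * (uv_size - 1)  # UV coordinates -> UV-image pixels

    for tri in triangles:
        p0, p1, p2 = uv[tri]        # 2D triangle corners in UV space
        v0, v1, v2 = vertices[tri]  # the "texture": 3D coordinates
        # Bounding box of the triangle, clipped to the UV image.
        xmin = max(int(np.floor(min(p0[0], p1[0], p2[0]))), 0)
        xmax = min(int(np.ceil(max(p0[0], p1[0], p2[0]))), uv_size - 1)
        ymin = max(int(np.floor(min(p0[1], p1[1], p2[1]))), 0)
        ymax = min(int(np.ceil(max(p0[1], p1[1], p2[1]))), uv_size - 1)
        denom = ((p1[1] - p2[1]) * (p0[0] - p2[0])
                 + (p2[0] - p1[0]) * (p0[1] - p2[1]))
        if abs(denom) < 1e-9:  # degenerate triangle
            continue
        for y in range(ymin, ymax + 1):
            for x in range(xmin, xmax + 1):
                # Barycentric coordinates of pixel (x, y) in the triangle.
                w0 = ((p1[1] - p2[1]) * (x - p2[0])
                      + (p2[0] - p1[0]) * (y - p2[1])) / denom
                w1 = ((p2[1] - p0[1]) * (x - p2[0])
                      + (p0[0] - p2[0]) * (y - p2[1])) / denom
                w2 = 1.0 - w0 - w1
                if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel inside triangle
                    # Interpolate x, y, z; no z-buffer is needed because the
                    # UV parameterization does not overlap itself.
                    pos_map[y, x] = w0 * v0 + w1 * v1 + w2 * v2
    return pos_map
```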

Also, I can explain the difference between a position map and PNCC (another widely used representation):

  1. The value to render. PNCC stores the normalized coordinates of the mean 3DMM shape, while a position map stores the 3D coordinates of the input face. When training the CNN I also normalize the position map, so the values of both representations lie in 0-1, which makes them easy to learn (see the sketch after this list).
  2. The render space. PNCC is rendered onto the projected vertices of the input face (so the input and output images are in pixel-to-pixel correspondence), while a position map is rendered in the parameterized coordinates (UV space) of the mean 3DMM shape. A position map therefore records a more complete shape and has a fixed layout (convenient for designing the loss), and it makes specific points such as landmarks easy to find: you only need an index, rather than searching for the nearest value in PNCC. In a sense, a position map is an inverse of PNCC :) Anyway, you can generate a position map by replacing these two values in PNCC (i.e., the value to render and the render space; the PNCC itself can be generated using the code from 3DDFA).
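
To make both conveniences concrete, here is a hedged sketch of normalizing a position map for training and reading landmarks with fixed UV indices. The `uv_kpt_ind.txt` path is copied from the PRNet repo layout and may differ in your checkout, and the 1.1x scaling constant is an assumption taken from the released inference code.

```python
import numpy as np

MAX_POS = 256 * 1.1  # assumption: PRNet's released code scales positions by
                     # roughly 1.1x the input resolution

def normalize_posmap(pos_map, max_pos=MAX_POS):
    # x, y are in image pixels and z starts at 0, so a single division maps
    # the whole map into roughly [0, 1] for training.
    return pos_map / max_pos

# Landmarks are fixed indices into UV space -- no nearest-neighbour search
# as PNCC would require. PRNet ships a 68-landmark index file like this.
uv_kpt_ind = np.loadtxt('Data/uv-data/uv_kpt_ind.txt').astype(np.int32)  # (2, 68)

def get_landmarks(pos_map):
    # One indexing operation recovers the 68 landmarks' 3D positions.
    return pos_map[uv_kpt_ind[1, :], uv_kpt_ind[0, :], :]  # (68, 3) x, y, z
```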

And... since so many people are interested in how to generate the position map, I will take some time to write a clearer version and release it (maybe in two months; I am busy these days).

Finally, thanks for your interest in our work.

developer-mayuan commented 6 years ago

@YadiraF For step 2: is there an automatic way to align the x, y coordinates of the generated vertices so that they correspond to the 2D face?

yfeng95 commented 6 years ago

Hi @developer-mayuan, if you generate the training data from 300W_LP, the vertices already correspond to the 2D face. If you use other datasets, you may need ICP to register the mesh to a template and then obtain the correspondence; a sketch of that step follows.
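
For illustration, here is a rough Open3D sketch of that registration step. Open3D is my own choice (the thread does not prescribe a library), `scan.ply` and `template.ply` are placeholder paths, and for faces you would typically follow this rigid alignment with a non-rigid registration.

```python
import numpy as np
import open3d as o3d

# Placeholder paths: the raw scan and a fixed-topology template whose
# vertices have known UV coordinates.
source = o3d.io.read_point_cloud('scan.ply')
target = o3d.io.read_point_cloud('template.ply')

# Rigid ICP: estimate the transform that aligns the scan to the template.
threshold = 10.0  # max correspondence distance, in the scan's units (assumed)
result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(result.transformation)

# After alignment, give each template vertex the position of its nearest
# scan point; this dense correspondence is what the position map needs.
tree = o3d.geometry.KDTreeFlann(source)
scan_pts = np.asarray(source.points)
corresponded = np.empty((len(target.points), 3))
for i, p in enumerate(np.asarray(target.points)):
    _, idx, _ = tree.search_knn_vector_3d(p, 1)
    corresponded[i] = scan_pts[idx[0]]
```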

sunjunlishi commented 5 years ago

great

saslamsameja commented 3 years ago

Hi @YadiraF, I wanted to ask about your point that "you can generate position map by replacing these two values in PNCC". Can I generate a UV map (position map) from a given PNCC (.jpg) file? If so, how? Which two values are you referring to, and what will they be replaced with? Thanks.