ShownX closed this issue 6 years ago.
@YadiraF For step 2: is there an automatic way to align the x,y coordinates of the generated vertices so that they correspond to the 2D face?
Hi @developer-mayuan, if you generate training data from 300W_LP, the vertices already correspond to the 2D face. If you use other datasets, you may need ICP to register the mesh to a template and then get the correspondence.
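A minimal sketch of that registration step (rigid ICP: rotation + translation only). The function name and toy data are illustrative stand-ins; a production pipeline would also handle scale, outliers, and a proper initialization:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(src, dst, iters=20):
    """Minimal rigid ICP sketch: register the vertices `src` to a
    template `dst`, giving a rough point correspondence."""
    src = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src)            # closest template vertex
        matched = dst[idx]
        # Best rigid transform via SVD (Kabsch algorithm).
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
    return src

# Toy check: the corners of a cube, shifted by a small translation,
# should snap back onto the template.
template = np.array([[x, y, z] for x in (0., 2.) for y in (0., 2.) for z in (0., 2.)])
moved = template + np.array([0.05, 0.05, 0.05])
registered = icp_rigid(moved, template)
```

Once the mesh is registered, each vertex can be matched to its nearest template vertex to establish the correspondence.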
great
Hi Shown, sorry for the late reply. I just explained the process in #4.
You only need two steps to generate position map:
- generate the vertices from the 300W_LP dataset, then modify them a little (as described in section 3.1: make sure the x,y coordinates of the generated vertices correspond to the 2D face, and that the min value of the z coordinates is 0)
- render the generated vertices with UV coordinates (re-sample). Here, the generated vertices are used as a texture (replace r,g,b with x,y,z).
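The two steps above can be sketched roughly as follows. `render_position_map`, the point-splat rendering, and the toy vertices are all illustrative stand-ins (a real pipeline rasterizes the mesh triangles, e.g. with a z-buffer renderer):

```python
import numpy as np

def render_position_map(vertices, uv_coords, map_size=256):
    """Splat per-vertex 3D coordinates into UV space (a minimal sketch;
    a real implementation rasterizes triangles with interpolation)."""
    # vertices:  (N, 3) x,y,z in image space, z shifted so min(z) == 0
    # uv_coords: (N, 2) in [0, 1], the fixed UV parameterization
    pos_map = np.zeros((map_size, map_size, 3), dtype=np.float32)
    # Convert UV coordinates to pixel indices in the map.
    u = np.clip((uv_coords[:, 0] * (map_size - 1)).astype(int), 0, map_size - 1)
    v = np.clip((uv_coords[:, 1] * (map_size - 1)).astype(int), 0, map_size - 1)
    pos_map[v, u] = vertices  # the "texture" is x,y,z instead of r,g,b
    return pos_map

# Toy example: 4 vertices, z already shifted so min(z) == 0.
verts = np.array([[10., 20., 0.], [30., 40., 5.], [50., 60., 2.], [70., 80., 1.]])
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pm = render_position_map(verts, uvs, map_size=8)
```

The key idea is only the substitution: the renderer is unchanged, but the per-vertex "color" it samples is the vertex's own 3D position.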
I can also explain the difference between the position map and PNCC (emm.. another widely used representation).
- About the value to render: PNCC uses the normalized coordinates of the mean 3DMM shape, while the position map uses the 3D coordinates of the input face. When training the CNN, I also normalized the position map, so the values of both representations lie in [0, 1], which makes them easy to learn.
- About the render space: PNCC is rendered at the projected vertices of the input face (so the input and output images are in pixel-to-pixel correspondence), while the position map is rendered in the parameterized coordinates (UV space) of the mean 3DMM shape. The position map therefore records a more complete shape and has a fixed layout (convenient for designing the loss). Besides, the position map makes it easier to find specific points (like landmarks: you only need a fixed index, rather than searching for the nearest value in PNCC). In a sense, the position map is an inverse of PNCC : ) Anyway, you can generate a position map by swapping these two choices in PNCC (which can be generated using the code from 3DDFA).
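To illustrate the "only need an index" point above: because the UV layout is fixed, a given semantic point always lives at the same pixel of every position map. The index below is made up for illustration; the actual landmark indices come from the released UV data:

```python
import numpy as np

def get_landmark(pos_map, uv_index):
    """Read a landmark's 3D coordinate straight out of the position map.
    No search is needed: the UV parameterization is shared by all faces."""
    v, u = uv_index
    return pos_map[v, u]

pos_map = np.zeros((256, 256, 3), dtype=np.float32)
pos_map[100, 120] = [33.0, 44.0, 5.0]       # pretend this pixel is the nose tip
nose_tip = get_landmark(pos_map, (100, 120))
```

With PNCC, by contrast, the pixel holding a given point moves with the input pose, so recovering it requires a nearest-value search over the whole image.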
And... since so many people are interested in how to generate the position map, I will spend time writing a clear version and releasing it (maybe two months later; I am busy these days).
Finally, thanks for your interest in our work.
Hi @YadiraF, I wanted to ask about your point that "Anyway, you can generate position map by replacing these two values in PNCC". Can I generate a UV map (position map) from a given PNCC (.jpg) file? If so, how? You mentioned above that by replacing "these two values" in PNCC we can generate the UV map. Which two values are you referring to, and what will they be replaced with? Thanks.
Hello, Yao,
Very impressive work. Can you explain a little bit about generating the ground-truth position map?