TimoBolkart / TF_FLAME

Tensorflow framework for the FLAME 3D head model. The code demonstrates how to sample 3D heads from the model, fit the model to 2D or 3D keypoints, and how to generate textured head meshes from images.
http://flame.is.tue.mpg.de/

Question #19

Closed: flynnamy closed this issue 4 years ago

flynnamy commented 4 years ago

Thank you for your good work. I have a question about texture mapping. How is the texture mapping obtained? How do you establish the correspondence between the source image and the UV image? @TimoBolkart

TimoBolkart commented 4 years ago

Maybe I misunderstand your question, but we provide the demo build_texture_from_image.py to do this mapping. Given an image and a fitted mesh (with UV map), the demo projects the image onto the 3D mesh and creates a partial texture map from it. Does this demo not answer your question?

flynnamy commented 4 years ago

No, I mean: how should I understand valid_pixel_ids, valid_pixel_3d_faces, and valid_pixel_b_coords? @TimoBolkart

TimoBolkart commented 4 years ago

I see. For the texture mapping, we first sample the UV map with a dense grid (i.e. one 2D point per pixel); these are denoted x_coords and y_coords. Not all of these grid points actually map to the 3D surface, as some fall outside the defined texture atlas. valid_pixel_ids are the ids of the grid points that do map to the surface (hence "valid"). valid_pixel_b_coords are the corresponding Barycentric coordinates that embed them in the 3D mesh surface, and valid_pixel_3d_faces are the faces (triangles) that correspond to the valid_pixel_ids. Given all of these,

    pixel_3d_points = v[valid_pixel_3d_faces[:, 0], :] * valid_pixel_b_coords[:, 0][:, np.newaxis] + \
                      v[valid_pixel_3d_faces[:, 1], :] * valid_pixel_b_coords[:, 1][:, np.newaxis] + \
                      v[valid_pixel_3d_faces[:, 2], :] * valid_pixel_b_coords[:, 2][:, np.newaxis]

gives for each UV grid point the corresponding 3D point on the mesh surface (i.e. v[valid_pixel_3d_faces[:, i], :] are the 3D vertices of the corresponding mesh triangle, and valid_pixel_b_coords[:, i] are the corresponding Barycentric weights). We then project all of these densely sampled 3D points (each corresponding to a pixel in the UV map) into the image and store the RGB color of the projected point at the corresponding UV map location.
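For readers landing here later, here is a minimal numpy sketch of that mapping (not the exact repo code). It assumes a fitted mesh `v` (N x 3), the texture_data arrays named in this thread, and a placeholder `project()` function standing in for whatever camera model was used during fitting.

```python
import numpy as np

def build_partial_texture(v, img, texture_data, project, tex_res=512):
    # Assumptions: texture_data contains the arrays discussed in this thread,
    # and project(points_3d) returns (M, 2) image coordinates for the fitted camera.
    x_coords = texture_data['x_coords']
    y_coords = texture_data['y_coords']
    valid_ids = texture_data['valid_pixel_ids']
    faces = texture_data['valid_pixel_3d_faces']      # (M, 3) vertex ids per valid texel
    b_coords = texture_data['valid_pixel_b_coords']   # (M, 3) Barycentric weights

    # Embed every valid texel into the 3D surface via Barycentric interpolation
    pixel_3d_points = (v[faces[:, 0], :] * b_coords[:, 0][:, np.newaxis] +
                       v[faces[:, 1], :] * b_coords[:, 1][:, np.newaxis] +
                       v[faces[:, 2], :] * b_coords[:, 2][:, np.newaxis])

    # Project the 3D points into the source image (camera model depends on the fit)
    pixel_2d = project(pixel_3d_points)               # (M, 2) image coordinates

    # Look up the image color of each projected point and write it into the UV map
    texture = np.zeros((tex_res, tex_res, 3), dtype=img.dtype)
    rows = np.clip(np.round(pixel_2d[:, 1]).astype(int), 0, img.shape[0] - 1)
    cols = np.clip(np.round(pixel_2d[:, 0]).astype(int), 0, img.shape[1] - 1)
    tex_rows = y_coords[valid_ids].astype(int)
    tex_cols = x_coords[valid_ids].astype(int)
    texture[tex_rows, tex_cols] = img[rows, cols]
    return texture
```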

flynnamy commented 4 years ago

What is the algorithm for sampling the UV map with a dense grid? If I want to obtain a complete texture, I think I need to understand it. What do you suggest for completing the texture in regions that are occluded or not visible? @TimoBolkart

flynnamy commented 4 years ago

Another question is about expression. You said FLAME models facial expressions with a linear expression space computed by PCA. What is the expression basis? Could you share the basis with us?

TimoBolkart commented 4 years ago

> What is the algorithm for sampling the UV map with a dense grid? If I want to obtain a complete texture, I think I need to understand it. What do you suggest for completing the texture in regions that are occluded or not visible? @TimoBolkart

It is trivially just two for loops to get (x, y) coordinates for every pixel. If you plot x_coords and y_coords as images, you will see it.
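As an illustration (not the repo's exact code), the dense grid can also be built with numpy instead of two explicit loops; `tex_res` below is whatever texture resolution you target:

```python
import numpy as np

tex_res = 512
# One (x, y) sample per pixel of the tex_res x tex_res UV map
x_coords, y_coords = np.meshgrid(np.arange(tex_res), np.arange(tex_res))
x_coords = x_coords.flatten()   # x coordinate of every pixel
y_coords = y_coords.flatten()   # y coordinate of every pixel

# Plotting either array reshaped to (tex_res, tex_res) shows the coordinate ramp, e.g.:
# import matplotlib.pyplot as plt
# plt.imshow(x_coords.reshape(tex_res, tex_res)); plt.show()
```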

> Another question is about expression. You said FLAME models facial expressions with a linear expression space computed by PCA. What is the expression basis? Could you share the basis with us?

FLAME uses linear blend skinning to rotate / open the jaw; the rest of the expression motion is modeled by the linear PCA expression space. So in total, the full expression of a face is a combination of the jaw pose AND the linear expression space. The models are publicly available, and all basis vectors are stored there.
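As a hedged illustration of where the basis vectors live: in the released FLAME model pickles, the shape and expression components are typically stacked in a single 'shapedirs' array (shape components first, expression components after, e.g. 300 + 100). The model path below is an assumption, and loading the pickle may require the chumpy package.

```python
import pickle
import numpy as np

# Assumption: './models/generic_model.pkl' is the downloaded FLAME model file
with open('./models/generic_model.pkl', 'rb') as f:
    flame = pickle.load(f, encoding='latin1')

shapedirs = np.array(flame['shapedirs'])   # (num_vertices, 3, 300 + 100); np.array converts chumpy arrays
shape_basis = shapedirs[:, :, :300]        # linear PCA shape basis
expression_basis = shapedirs[:, :, 300:]   # linear PCA expression basis
print(shape_basis.shape, expression_basis.shape)
```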

flynnamy commented 4 years ago

Thanks for your reply. Why is the dimension of the coords 235219? Where are all the basis vectors stored? Could you please make it clear, so I can get the 100-dimensional expression basis vectors and the 300-dimensional shape basis vectors?

flynnamy commented 4 years ago

Hi, how do you get lmk_b_coords? I would like to add landmarks on the nose, since there are currently no constraints on the nose. @TimoBolkart

D0miH commented 4 years ago

Hello @TimoBolkart, I have a similar question. I would like to generate higher resolution textures from a photo than the current resolution of 512x512. So far I am able to calculate the valid_pixel_ids for higher resolution texture maps. But I am a bit lost on how the valid_pixel_3d_faces and the valid_pixel_b_coords were obtained. Did you use some software (like meshlab) to get the mapping of the pixels of the texture map to the corresponding 3D faces of the mesh? Or are the arrays containing the 3D faces of the mesh and the array containing the UV indices aligned such that the first triangle on the UV map corresponds to the first face of the 3D mesh?

I would be very grateful if you could briefly explain the idea on how to obtain those values.

TimoBolkart commented 4 years ago

> Hello @TimoBolkart, I have a similar question. I would like to generate higher resolution textures from a photo than the current resolution of 512x512. So far I am able to calculate the valid_pixel_ids for higher resolution texture maps. But I am a bit lost on how the valid_pixel_3d_faces and the valid_pixel_b_coords were obtained. Did you use some software (like meshlab) to get the mapping of the pixels of the texture map to the corresponding 3D faces of the mesh? Or are the arrays containing the 3D faces of the mesh and the array containing the UV indices aligned such that the first triangle on the UV map corresponds to the first face of the 3D mesh?
>
> I would be very grateful if you could briefly explain the idea on how to obtain those values.

I created texture_data files at resolutions 256x256, 512x512, 1024x1024, and 2048x2048, which you can download from here. I updated the code so that, depending on the specified file, it outputs texture maps of the corresponding resolution. I hope that helps.
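A small usage sketch for switching resolutions, assuming the downloaded files are named like texture_data_2048.npy and contain the keys discussed in this thread (print the keys before relying on them):

```python
import numpy as np

# Assumption: file name and keys follow the arrays discussed in this thread
texture_data = np.load('texture_data_2048.npy', allow_pickle=True, encoding='latin1').item()
print(texture_data.keys())

tex_res = 2048
x_coords = texture_data['x_coords']
y_coords = texture_data['y_coords']
valid_pixel_ids = texture_data['valid_pixel_ids']
valid_pixel_3d_faces = texture_data['valid_pixel_3d_faces']
valid_pixel_b_coords = texture_data['valid_pixel_b_coords']
```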

D0miH commented 4 years ago

That helps indeed. Thank you very much!

TimoBolkart commented 4 years ago

> Hi, how do you get lmk_b_coords? I would like to add landmarks on the nose, since there are currently no constraints on the nose. @TimoBolkart

We use the standard set of keypoints provided by common landmark predictors, excluding the 17 landmarks on the face boundary. The lmk_b_coords are the Barycentric coordinates of each landmark. We first specify the triangle a landmark lies in by an integer index, and then three values specify the Barycentric weights of that triangle's vertices.
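A small sketch of how such an embedding is used, and how a new nose landmark could be added. The names lmk_face_idx, v, and f are assumptions about the surrounding code, and the triangle index 123 is purely illustrative:

```python
import numpy as np

def landmarks_3d(v, f, lmk_face_idx, lmk_b_coords):
    # v: (N, 3) mesh vertices, f: (F, 3) triangle vertex ids,
    # lmk_face_idx: (K,) triangle id per landmark, lmk_b_coords: (K, 3) Barycentric weights
    tri = f[lmk_face_idx]                          # (K, 3) vertex ids of each landmark's triangle
    return (v[tri[:, 0]] * lmk_b_coords[:, 0:1] +
            v[tri[:, 1]] * lmk_b_coords[:, 1:2] +
            v[tri[:, 2]] * lmk_b_coords[:, 2:3])   # (K, 3) landmark positions on the surface

# Adding a new landmark: pick the triangle the point lies in on the template mesh
# (e.g. by inspecting it in MeshLab or Blender) and choose Barycentric weights.
# Placing it exactly on the first vertex of a hypothetical nose triangle 123:
new_face_idx = np.array([123])                     # illustrative triangle id
new_b_coords = np.array([[1.0, 0.0, 0.0]])         # all weight on that triangle's first vertex
```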