zhuhao-nju / facescape

FaceScape (PAMI2023 & CVPR2020)

How to find the landmark of a random vector? #122

Open yeluoo opened 8 months ago

yeluoo commented 8 months ago

I can't generate an angled (rotated) face with the code below.

```python
# create random identity vector
random_id_vec = np.random.normal(model.id_mean, np.sqrt(model.id_var))

# create random expression vector
exp_vec = np.zeros(52)
exp_vec[0] = 1

# generate full head mesh
mesh_full = model.gen_full(random_id_vec, exp_vec)

# render
depth_full, image_full = render_cvcam(trimesh.Trimesh(vertices = mesh_full.vertices,
                                                      faces = mesh_full.faces_v - 1),
                                      Rt = Rt)
```
yeluoo commented 8 months ago

@zhuhao-nju

icewired-yy commented 8 months ago

@yeluoo What's wrong with this code? Is the rendered result blank?

yeluoo commented 8 months ago

There is no problem with the code; I want to render the face at an angle. @icewired-yy

icewired-yy commented 8 months ago

@yeluoo I see.

If you want to get an angled face in the rendered image, you can modify the Rt matrix. The Rt matrix consists of the rotation matrix at [0:3, 0:3] and the camera translation at [0:3, 3]. You can modify either part to control the Rt matrix, and thus the final view angle in rendering.
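As a minimal sketch of this idea (my own assumption of an OpenCV-style convention, not code from the facescape toolkit): build `Rt = [R | t]` with a yaw rotation around the vertical axis, where `x_cam = R @ x_world + t` and `t = -R @ C` for a camera centered at `C`.

```python
import numpy as np

def make_rt(yaw_deg=30.0, cam_center=(0.0, 0.0, 600.0)):
    """Sketch: 3x4 [R | t] extrinsic with a yaw around the y axis."""
    a = np.deg2rad(yaw_deg)
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    t = -R @ np.asarray(cam_center, dtype=float)  # world -> camera translation
    return np.hstack([R, t[:, None]])             # 3 x 4
```

Passing different `yaw_deg` values into the `Rt` argument of `render_cvcam` should then render the head from different angles; the exact axis conventions depend on the renderer, so you may need to flip a sign or two.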

If you want to export an angled face model (with the vertex coordinates themselves modified), you can apply the rotation matrix directly to the vertices, since the model is topologically uniform.
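A short sketch of that rotation-on-vertices approach (again my own illustration): because all FaceScape meshes share one topology, rotating the vertex array is enough to export an angled model, and the face indices stay unchanged.

```python
import numpy as np

def rotate_mesh_yaw(vertices, yaw_deg):
    """Rotate an (N, 3) vertex array around the vertical (y) axis."""
    a = np.deg2rad(yaw_deg)
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return vertices @ R.T  # v' = R @ v for each row vector v
```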

All the advice above is based on my experience. Hope it will be helpful to you.

yeluoo commented 8 months ago

I am currently using the solution you mentioned. Adjusting the Rt matrix is equivalent to adjusting the viewpoint. What I want is for the face itself to move left and right. These are two different things. @icewired-yy

icewired-yy commented 8 months ago

@yeluoo Sorry for misunderstanding your questions.

Do you mean moving the face left or right in the rendered image? We can assume the original face is located at the center of the rendered image.

@yeluoo If that's right, you can modify the vertex coordinates of the model before rendering it, just like what is done in the official implementation. For example, in the fit demo:

```python
mesh_tm = trimesh.Trimesh(vertices = mesh.vertices.copy(),
                          faces = fs_fitter.fv_indices_front - 1,
                          process = False)
mesh_tm.vertices[:, :2] = mesh_tm.vertices[:, 0:2] - np.array([src_img.shape[1] / 2, src_img.shape[0] / 2])
mesh_tm.vertices = mesh_tm.vertices / src_img.shape[0] * 2
mesh_tm.vertices[:, 2] = mesh_tm.vertices[:, 2] - 10
```

The above code moves the model from image space ([0, resolution]) to OpenGL NDC ([-1, 1]). This kind of vertex transform is equivalent to the left or right movement you want.
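Building on that, a minimal sketch (my own, under the assumption that the vertices already live in NDC, with x in [-1, 1]): a plain translation along x slides the face left or right in the rendered image without changing the view angle.

```python
import numpy as np

def shift_face_x(vertices, dx):
    """Translate an (N, 3) vertex array along x in NDC units."""
    v = vertices.copy()
    v[:, 0] += dx  # dx > 0: move right, dx < 0: move left
    return v
```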

yeluoo commented 8 months ago

Hello, I feel you haven't understood what I mean. Do you have QQ? You can add me at 1830343214; it's hard to explain clearly here. @icewired-yy

yeluoo commented 8 months ago

Does the bilinear model still have the original 20 expressions? I don't need most of the expanded 52 expressions.

My goal is to use a parametric model to generate landmarks for faces with different fatness, thickness, poses, and expressions. @icewired-yy

icewired-yy commented 8 months ago

@yeluoo The FaceScape dataset has already fitted the 20 expressions for every participant with its bilinear model; the results are stored in the TU-model part of the FaceScape dataset.

To extract the landmarks from a generated face model, or from any FaceScape bilinear model, use the landmark_indices.* file provided under toolkit/predef in the facescape repository. Use these indices to look up the 2D or 3D coordinates of the landmark vertices.

By the way, FaceScape has no pose dimension in its bilinear model. If you want to generate face models with different poses, try FLAME, which has three parameter groups: shape, expression, and pose.

The 52-dim expression vector doesn't mean the model can only generate 52 expressions; you can sample the expression vector from a normal distribution, just as you did with the identity vector.
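As a hypothetical sketch of the landmark step (the names `lm_indices`, `K`, and `Rt` are placeholders: `lm_indices` stands for the integer indices loaded from toolkit/predef/landmark_indices.*, whose exact file format I'm not restating here, and `K`/`Rt` are the pinhole intrinsics/extrinsics used for rendering):

```python
import numpy as np

def landmarks_3d(vertices, lm_indices):
    """Topologically uniform mesh -> landmarks are fixed vertex indices."""
    return np.asarray(vertices)[np.asarray(lm_indices)]

def landmarks_2d(lms_3d, K, Rt):
    """Project (N, 3) landmarks into the image with a pinhole camera."""
    pts_cam = Rt[:, :3] @ lms_3d.T + Rt[:, 3:4]  # world -> camera
    pts_img = K @ pts_cam                        # camera -> image plane
    return (pts_img[:2] / pts_img[2]).T          # perspective divide -> (N, 2)
```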

yeluoo commented 8 months ago

Thank you for your patient answer. I have another question.

```python
random_color_vec = (np.random.random(100) - 0.5) * 100
mesh = model.gen_face_color(
    id_vec=id_vec,
    exp_vec=exp_vec,
    vc_vec=random_color_vec,
)
depth, face_full = renderer.render_cvcam(
    trimesh.Trimesh(vertices=mesh.vertices,
                    faces=mesh.faces_v - 1,
                    vertex_colors=mesh.vert_colors),
    K=K,
    Rt=Rt,
    rend_size=(768, 768)
)
```

The background of the rendering result here is white. I would like the background to be black, or to have no background at all. Where should I modify this? @icewired-yy

icewired-yy commented 8 months ago

Glad to hear that my suggestion is helpful to you @yeluoo.

The default background of results rendered with pyrender is white. There may be a setting that changes the background, but my solution is to use the depth map to build a mask and modify the final result.

You see, the output of rendering is a depth map and a color image. The depth of the background is zero, so we can use:

```python
# Render scan mesh
colorImage, depthImage = scanRenderer(calibratedScanMesh, K, Rt, light)

# Mask out the face region
validRegionMask = depthImage != 0
validRegionMask = validRegionMask[..., np.newaxis]
colorImage = colorImage * validRegionMask
```

This is part of my code, and you can try my solution. Hope it will be helpful to you.
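Applied to the `depth, face_full` pair returned by `render_cvcam` earlier in this thread, a sketch might look like this (assuming, as above, that the background has depth == 0 and the color image is uint8):

```python
import numpy as np

def mask_background(color, depth):
    """Black out the background and build an RGBA version with a mask alpha."""
    mask = (depth != 0)[..., np.newaxis]       # H x W x 1 boolean face mask
    black_bg = color * mask                    # background pixels -> 0 (black)
    alpha = (mask * 255).astype(np.uint8)      # mask as an alpha channel
    rgba = np.dstack([black_bg, alpha])        # "no background" for PNG export
    return black_bg, rgba
```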