google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0

Understanding canonical face model #4910

Open saadi297 opened 11 months ago

saadi297 commented 11 months ago

I am trying to understand the canonical face model. I tried to project the vt coordinates but I am not getting the right landmark indices. Please see the following code and figure:

import numpy as np
import cv2

def load_obj_file(file_path):
    # Collect the texture coordinates ("vt" lines) in the order they appear in the file.
    uv_coords = []
    with open(file_path, 'r') as f:
        for line in f:
            if line.startswith('vt '):
                uv = [float(coord) for coord in line.strip().split()[1:]]
                uv_coords.append(uv)
    return np.array(uv_coords)

obj_path = 'canonical_face_model.obj'
uv_coords = load_obj_file(obj_path)

# Draw the UV coordinates on a white 2048x2048 canvas and label each point with
# its position in the vt list.
white_img = np.ones((2048, 2048, 3), dtype=np.uint8) * 255
img_h, img_w = white_img.shape[:2]
uv_coords[:, 1] = 1 - uv_coords[:, 1]   # flip V so the origin is at the top-left
uv_coords[:, 0] = uv_coords[:, 0] * img_w
uv_coords[:, 1] = uv_coords[:, 1] * img_h
for num, uv in enumerate(uv_coords):
    cv2.circle(white_img, (int(uv[0]), int(uv[1])), 1, (0, 255, 0), -1)
    cv2.putText(white_img, str(num), (int(uv[0]), int(uv[1])), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 0, 255), 1)
cv2.imwrite('canonical_face_model.png', white_img)

canonical_face_model

You can see that the landmark indices are different from this image. I then tried projecting these uv coordinates and got the correct indices, as shown below. I also noticed that the uv points in the vertex_buffer are the same as the ones given here.

canonical_face_model_1

As a last experiment, I also tried to convert the canonical_face_model.obj file to geometry_pipeline_metadata_landmarks.pbtxt based on this comment and using this code. The result was different from this file.

I would really appreciate it if anyone could explain how to get the correct indices from the canonical_face_model.obj file. Do I need to perform some mapping?

kuaashish commented 6 months ago

Hi @saadi297,

Apologies for the delay in response. Could you please confirm whether you still require assistance in resolving this issue or if it has been resolved from your end?

Thank you!!

saadi297 commented 6 months ago

Yes, I still require assistance.

jseobyun commented 3 months ago

Same issue. I am also confused by the mismatch between the OBJ face and canonical_face_model.png.

Is there an index map between the two?

jseobyun commented 3 months ago

@saadi297 I found the correct index mapping between the detector's output and the given OBJ file.

First of all, I conclude that only "canonical_face_model.obj" is usable; it has 468 keypoints without the iris.

The reason is that "face_model_with_iris.obj" contains wrong uv coordinates for the 10 additional iris keypoints.

When visualizing only the 10 extra iris points of "face_model_with_iris.obj", the result looks like the image below.

obj_index

The 10 iris points all overlap at the same position, so the "face_model_with_iris.obj" file is broken.
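To check this yourself, here is a minimal sketch; it assumes the 10 iris UVs are the last 10 "vt" entries in the file (appended after the 468 face UVs), which is an assumption rather than something documented.

import numpy as np

# Minimal sketch. Assumption: the 10 iris UVs are the last 10 "vt" entries of
# face_model_with_iris.obj, appended after the 468 face UVs.
with open('face_model_with_iris.obj') as f:
    vt = np.array([[float(c) for c in line.split()[1:3]]
                   for line in f if line.startswith('vt ')])

iris_uv = vt[-10:]
print(iris_uv)
# If the file is broken as described above, the 10 rows collapse to (almost)
# a single unique UV position.
print(np.unique(np.round(iris_uv, 6), axis=0))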


The order of the keypoints in "canonical_face_model.obj" is as follows.

obj_index

The order of the detector's output keypoints is as follows (it is the same as the official canonical_face_model_uv_visualization.png).

det_index
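One way to relate the two orderings without an external file is to read the "f" (face) definitions of the OBJ: in the "v/vt" form, each entry pairs a vertex index with a texture-coordinate index. The sketch below assumes the OBJ's "v" entries are already in the detector's landmark order, which is not guaranteed; if that assumption holds, the face definitions directly give the landmark-to-UV-index mapping.

# Sketch under two assumptions: the OBJ's "v" entries follow the detector's
# landmark order, and each "f" entry uses the "v/vt" form.
def vertex_to_vt_index_map(obj_path):
    mapping = {}
    with open(obj_path) as f:
        for line in f:
            if line.startswith('f '):
                for token in line.split()[1:]:
                    v_idx, vt_idx = token.split('/')[:2]
                    mapping[int(v_idx) - 1] = int(vt_idx) - 1  # OBJ indices are 1-based
    return mapping

det2vt = vertex_to_vt_index_map('canonical_face_model.obj')
print(det2vt[0])  # vt index belonging to landmark 0, under the assumption above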


If you want to convert the detector's output to the OBJ vertex order, use this mapping: det2obj.json

Conversely, you can convert the OBJ vertex order to the detector's order with the following file:

obj2det.json
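For completeness, a minimal usage sketch for such a mapping file. The exact JSON layout of det2obj.json is an assumption here (a flat list where entry i is the OBJ vertex index for detector landmark i), so adjust the indexing if the file is keyed differently.

import json
import numpy as np

# Usage sketch. Assumption: det2obj.json is a flat list where entry i is the
# OBJ vertex index assigned to detector landmark i.
with open('det2obj.json') as f:
    det2obj = np.array(json.load(f))

landmarks = np.zeros((468, 3))        # stand-in for the detector's 468 (x, y, z) landmarks
obj_order = np.empty_like(landmarks)
obj_order[det2obj] = landmarks        # landmark i goes to OBJ vertex det2obj[i]
# The reverse direction works the same way with obj2det.json.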


I hope it helps!