Rassibassi / mediapipeDemos

Real-time Python demos of google mediapipe

mediapipe bounding box or rotate angle of the head #22

Open isdito opened 1 year ago

isdito commented 1 year ago

Good night,

I am looking for a way, using the face points array, to get its bounding box and the face rotation (tilt, yaw, roll).

mat = np.array(face_landmarks.landmark)

It returns the array of face points, but I can't figure out how the face is rotated; at the very least I'd like to know its size through the bounding box.
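Something like this sketch is what I mean by the bounding box part (landmark_bbox is just a hypothetical helper; it assumes the FaceMesh landmarks, which are normalized to [0, 1], and a known frame width/height in pixels):

import numpy as np

def landmark_bbox(face_landmarks, frame_width, frame_height):
    # Collect the normalized (x, y) coordinates of all face landmarks.
    pts = np.array([(lm.x, lm.y) for lm in face_landmarks.landmark])
    # Scale to pixel coordinates and take the min/max corners.
    x_min, y_min = (pts.min(axis=0) * (frame_width, frame_height)).astype(int)
    x_max, y_max = (pts.max(axis=0) * (frame_width, frame_height)).astype(int)
    return x_min, y_min, x_max, y_max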

Can you help me? Regards

CODE


import cv2
import mediapipe as mp
import numpy as np

camera = cv2.VideoCapture("F:/C0010.MP4")

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils

with mp_face_mesh.FaceMesh(
        static_image_mode=False,
        max_num_faces=1,
        min_detection_confidence=0.5) as face_mesh:

    while True:
        ret, frame = camera.read()
        if not ret:
            break

        # MediaPipe expects RGB input; OpenCV reads frames as BGR.
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = face_mesh.process(frame_rgb)

        if results.multi_face_landmarks is not None:
            for face_landmarks in results.multi_face_landmarks:
                mp_drawing.draw_landmarks(frame, face_landmarks)

                # getaxis and rotation_angles are my own helpers (not shown here).
                ys, zs, xs = getaxis(face_landmarks.landmark)
                mat = np.array(face_landmarks.landmark)
                print(rotation_angles(mat, "XYZ"))

        cv2.imshow("FRAME", frame)
        k = cv2.waitKey(1) & 0xFF

        if k == 27:  # ESC
            break

camera.release()
Rassibassi commented 1 year ago

Isn't the head posture example what you need? https://github.com/Rassibassi/mediapipeDemos/blob/main/head_posture.py

There you have a vector pointing in the direction of the face, including mediapipe and cv2 rotation vectors. The bounding box can be inferred from the vector and its normal plane.
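For example, here is a minimal sketch of turning such a rotation vector (like the one cv2.solvePnP produces in that script) into pitch/yaw/roll angles; the function name and the pitch/yaw/roll labelling are assumptions, and the angles come back in degrees:

import cv2

def euler_from_rotation_vector(rotation_vector):
    # Axis-angle rotation vector (e.g. from cv2.solvePnP) -> 3x3 rotation matrix.
    rotation_matrix, _ = cv2.Rodrigues(rotation_vector)
    # RQ decomposition returns the Euler angles (in degrees) as its first value.
    angles, _, _, _, _, _ = cv2.RQDecomp3x3(rotation_matrix)
    pitch, yaw, roll = angles  # one interpretation of the x/y/z angles
    return pitch, yaw, roll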

isdito commented 1 year ago

Hello Rasmus,

I checked the code and I think it works, but there are some issues. I'm not a programmer, I'm a videographer, and I'm trying to do things with Python.

I don't know if it can be improved; I have some doubts.

I'm leaving you the original 100 fps video of my sons and the final result, which is at 25 fps and slows everything down (this was not intended).

Original: https://www.video-boda.es/C0010.MP4

Final: https://www.video-boda.es/filename.avi

Problem 1:

When I remove this line: frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

it gives me this error:

     landmarks = landmarks[:, :468]
NameError: name 'landmarks' is not defined

Code fragment:


frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# img_h, img_w = frame.shape[:2]
results = face_mesh.process(frame)
cv2.imshow("FRAME", frame)
multi_face_landmarks = results.multi_face_landmarks

if multi_face_landmarks:
    face_landmarks = multi_face_landmarks[0]
    landmarks = np.array([(lm.x, lm.y, lm.z) for lm in face_landmarks.landmark])
    # print(landmarks.shape)
    landmarks = landmarks.T

    if refine_landmarks is not None:
        landmarks = landmarks[:, :468]

    metric_landmarks, pose_transform_mat = get_metric_landmarks(landmarks.copy(), pcf)

Second problem

I understand that, from the rotation vector, I should look at the Y component: whenever it approaches 0, it seems the person is looking at the camera.
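Something like this rough check is what I have in mind (the 10 degree threshold is arbitrary, and a yaw angle in degrees is assumed, e.g. from a decomposition like the one sketched above):

def is_looking_at_camera(yaw, threshold_deg=10.0):
    # True when the head is roughly facing the camera (yaw close to zero).
    return abs(yaw) < threshold_deg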

What do you think? Do you think it can be improved?

What I want:

Slow down the video sequence (recorded at 100 fps) when a person looks at the camera and then return to normal speed. Apply a sine curve function that slows down the frame rate for some time x and then speeds it up again.
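Roughly what I imagine, as a sketch only (all the names, the XVID codec and the numbers here are just assumptions): duplicate each frame a variable number of times, with the duplication factor ramped up and down by a sine curve around the moments where the person looks at the camera.

import math
import cv2

def write_with_sine_slowdown(frames, looking, out_path, fps=25, max_slow=4, ramp=50):
    # frames: list of BGR frames; looking: one boolean per frame.
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"), fps, (w, h))
    for i, frame in enumerate(frames):
        # Distance (in frames) to the nearest frame where someone looks at the camera.
        dists = [abs(i - j) for j, flag in enumerate(looking) if flag]
        d = min(dists) if dists else ramp
        # Sine ramp: factor == max_slow at d == 0, back to 1 once d >= ramp.
        t = min(d, ramp) / ramp
        factor = 1 + (max_slow - 1) * math.sin((1 - t) * math.pi / 2)
        for _ in range(int(round(factor))):
            writer.write(frame)
    writer.release()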

Rassibassi commented 1 year ago

I'm sorry, but I can't help you with such a specific issue. You will have to learn Python and figure it out yourself.

isdito commented 1 year ago

Thank you