martinheidegger opened 2 years ago
I extracted this new face-alignment variant from @fyr91's face alignment code; it should be added to this PR:
```python
import numpy
import skimage.transform
import cv2

# reference template - by default for a 112x112 input size
glint_align_dst = numpy.array(
    [
        [30.2946, 51.6963],  # right eye
        [65.5318, 51.5014],  # left eye
        [48.0252, 71.7366],  # nose tip
        [33.5493, 92.3655],  # right mouth corner
        [62.7299, 92.2041]   # left mouth corner
    ],
    dtype=numpy.float32
)
# move all points 8 pixels to the right (turns the 112x96 template into 112x112)
glint_align_dst[:, 0] += 8.0

def glint_align(image, landmarks):
    # make sure the input image size is 112x112, e.g. resize first:
    # image = cv2.resize(image, (112, 112))
    # otherwise, for a 112x96 input, comment out the 8 pixel shift in the reference points
    rightEyePts = landmarks[36:42]
    leftEyePts = landmarks[42:48]
    nose_tip = landmarks[33]
    mouth_left = landmarks[48]
    mouth_right = landmarks[54]  # 54 is the right mouth corner in the 68-point scheme; 52 is an upper-lip point
    # keep the eye centers as floats - casting to int would lose subpixel precision
    leftEyeCenter = leftEyePts.mean(axis=0)
    rightEyeCenter = rightEyePts.mean(axis=0)
    src = numpy.asarray([rightEyeCenter, leftEyeCenter, nose_tip, mouth_left, mouth_right], dtype=numpy.float32)
    # https://scikit-image.org/docs/stable/api/skimage.transform.html#skimage.transform.SimilarityTransform
    tform = skimage.transform.SimilarityTransform()
    tform.estimate(src, glint_align_dst)
    M = tform.params[0:2, :]
    image = cv2.warpAffine(image, M, (112, 112), borderValue=0.0)
    return image
```
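A side note on the transform step above: `SimilarityTransform.estimate` does a least-squares fit of rotation, uniform scale and translation, and the top two rows of its 3x3 `params` matrix are exactly the 2x3 matrix that `cv2.warpAffine` expects. A minimal standalone sketch with made-up points (not the face template) illustrates this:

```python
import numpy
import skimage.transform

# source: a unit square; destination: the same square scaled by 2 and shifted by (10, 5)
src = numpy.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=numpy.float64)
dst = src * 2.0 + numpy.array([10.0, 5.0])

tform = skimage.transform.SimilarityTransform()
ok = tform.estimate(src, dst)  # returns True when the fit succeeds
assert ok

M = tform.params[0:2, :]       # 2x3 matrix, usable directly with cv2.warpAffine
# applying M to a point in homogeneous coordinates reproduces the destination
mapped = M @ numpy.array([1.0, 1.0, 1.0])  # maps (1, 1) to (12, 7)
```

The fitted `tform.scale`, `tform.rotation` and `tform.translation` attributes can also be inspected to sanity-check an estimated alignment.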
@fyr91 I am looking at the destination points and wondering why the eye/mouth points are not level (same y coordinate). Wouldn't an algorithm like this be reasonable?
```python
import numpy

def glint_align_dst(width, height):
    center_x = width / 2
    eye_left = center_x - height * 0.1585
    eye_right = width - eye_left
    eye_y = height * 0.46
    nose_x = center_x
    nose_y = height * 0.64
    mouth_y = height * 0.8241
    mouth_left = center_x - height * 0.13
    mouth_right = width - mouth_left
    return numpy.array(
        [
            [eye_left, eye_y],
            [eye_right, eye_y],
            [nose_x, nose_y],
            [mouth_left, mouth_y],
            [mouth_right, mouth_y]
        ],
        dtype=numpy.float32
    )
```
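Evaluating the proposed generator at the default 112x112 size (the function is repeated below so the check runs standalone) shows that the eyes and mouth corners land on level, mirrored rows, with values close to the hand-tuned glint template:

```python
import numpy

# proposed reference-point generator, repeated from the comment above
def glint_align_dst(width, height):
    center_x = width / 2
    eye_left = center_x - height * 0.1585
    eye_right = width - eye_left
    eye_y = height * 0.46
    nose_x = center_x
    nose_y = height * 0.64
    mouth_y = height * 0.8241
    mouth_left = center_x - height * 0.13
    mouth_right = width - mouth_left
    return numpy.array(
        [
            [eye_left, eye_y],
            [eye_right, eye_y],
            [nose_x, nose_y],
            [mouth_left, mouth_y],
            [mouth_right, mouth_y]
        ],
        dtype=numpy.float32
    )

dst = glint_align_dst(112, 112)
# eyes:  (38.248, 51.52) and (73.752, 51.52)   - level, mirrored around x = 56
# nose:  (56.0,   71.68)
# mouth: (41.44, 92.2992) and (70.56, 92.2992) - level, mirrored around x = 56
print(dst)
```

For comparison, the shifted glint template puts the eyes at roughly (38.29, 51.70) and (73.53, 51.50), so the two layouts are close but not identical.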
@fyr91 this alignment seems very different from the previous alignments. Before, the left eye was at (28.8, 33.6); now it is at (30.2946, 51.6963). It seems we need to (re)train the model, right?
Result of working on the first dockerized version of KYCDeepFace and other refactorings.

Contains:
- 1-to-1 API
  https://github.com/tradle/KYCDeepFace/blob/69b00018fd5446f755bdff3d9fd37a6f103a037d/vision/kycdeepface.py#L53
  https://github.com/tradle/KYCDeepFace/blob/69b00018fd5446f755bdff3d9fd37a6f103a037d/vision/kycdeepface.py#L112
- Lambda docker API
  https://github.com/tradle/KYCDeepFace/blob/69b00018fd5446f755bdff3d9fd37a6f103a037d/%CE%BB.py#L33-L38
- AWS deployment: .github/workflows/deploy.yml, which actively deploys to AWS
- Refactored image-processing configuration
  https://github.com/tradle/KYCDeepFace/blob/69b00018fd5446f755bdff3d9fd37a6f103a037d/config.py#L58-L69
- Refactored the first two CLIs: bin/pytorch-to-onnx.py and bin/webcam-alignment-example.py
- First set of unit tests in test.py (executed as GitHub Actions)
- First set of models for tests