Closed · xjj980226 closed this issue 2 years ago
Excuse me, can I test videos other than those in the dataset? If so, how do I perform face alignment and obtain the aligned files?

To run the model on other videos you need to write your own face alignment code. We used an alignment to 5 points (similar to the code you can find here: https://github.com/deepinsight/insightface/blob/master/python-package/insightface/utils/face_align.py). Since alignment depends on the face model, bounding-box detector, landmark detector, and alignment code, I suggest retraining the network with data that comes from your own alignment pipeline to get better results. To add the mask image/channel to the input image, use the code from our face_alignment.py.
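For illustration, below is a minimal sketch of such a 5-point alignment pipeline built on the insightface package linked above. The detector model name, input video path, and 112x112 output size are assumptions for the example rather than this repository's exact settings, and the sketch does not add the mask channel, which this repository handles in face_alignment.py.

```python
# Minimal sketch: detect faces in a video and align each crop to 5 points
# using insightface. Assumes insightface and opencv-python are installed.
# 'buffalo_l', 'my_video.mp4', and image_size=112 are example choices only.
import cv2
from insightface.app import FaceAnalysis
from insightface.utils import face_align

app = FaceAnalysis(name='buffalo_l')  # bundled detector + 5-point landmarks
app.prepare(ctx_id=0, det_size=(640, 640))

cap = cv2.VideoCapture('my_video.mp4')  # hypothetical input video
aligned_frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    faces = app.get(frame)
    if not faces:
        continue  # no face detected in this frame
    # face.kps holds the 5 landmarks (eye centers, nose tip, mouth corners);
    # norm_crop warps the frame so they match a canonical template.
    aligned = face_align.norm_crop(frame, faces[0].kps, image_size=112)
    aligned_frames.append(aligned)
cap.release()
```

Note that because the warp is tied to the landmark template and detector used, crops produced this way may differ slightly from the training-time alignment, which is why retraining on data from your own pipeline is recommended above.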