Thank you for the elegant implementation. It helps a lot!
I am wondering why you need to detect faces in the VoxCeleb dataset when we already have the face bounding box metadata that ships with it. Are you trying to crop tighter face bounding boxes instead of using theirs? What would happen if we trained the first order model on faces cropped with the provided boxes?
Same question. Besides, it seems that the provided bounding boxes are not square.
For example, one bounding box has a size of (1018 - 648, 553 - 48), i.e., (370, 505). However, this code directly resizes that rectangular crop to a square, as shown here, which would stretch the face.
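For reference, a minimal sketch of how one could expand the rectangular box to a square before resizing, which avoids the stretching. This is not the repo's actual crop code; the function name, coordinate convention, and clamping strategy are assumptions for illustration only:

```python
from skimage.transform import resize

def crop_square(frame, left, top, right, bottom, out_size=256):
    # hypothetical helper: grow the shorter side of the bbox so the
    # crop is square, then resize without changing the aspect ratio
    h, w = bottom - top, right - left      # e.g. (505, 370) for the box above
    side = max(h, w)
    cx, cy = (left + right) // 2, (top + bottom) // 2
    half = side // 2
    # clamp the square to the frame so indexing never goes out of bounds
    x0 = max(0, min(cx - half, frame.shape[1] - side))
    y0 = max(0, min(cy - half, frame.shape[0] - side))
    square = frame[y0:y0 + side, x0:x0 + side]
    return resize(square, (out_size, out_size))
```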