Open ivanpanshin opened 4 years ago
Actually, a follow-up question. I found out that SAN was trained with GT bounding boxes from 300-W, not bounding boxes from a detector. That said, how is it even possible to run inference with SAN, since there is no way to compute a GT-like bounding box for a new image?
We provide several kinds of bounding boxes, including both GT and DET (obtained from a detector) [https://github.com/D-X-Y/landmark-detection/blob/master/SAN/cache_data/generate_300W.py#L54]. A possible way to solve your problem is to extract the bounding boxes yourself and re-train SAN.
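For context, the "GT" box in 300-W is essentially the tight rectangle around the annotated landmarks. A minimal sketch of that idea, assuming the ground-truth points are available as an (N, 2) array (the referenced generate_300W.py may differ in detail):

```python
import numpy as np

def gt_style_bbox(landmarks):
    """Tight box around the annotated landmarks, i.e. the GT-style box
    used for 300-W. `landmarks` is an (N, 2) array of (x, y) ground-truth
    points loaded from the dataset's .pts annotations."""
    landmarks = np.asarray(landmarks, dtype=np.float32)
    x_min, y_min = landmarks.min(axis=0)  # leftmost / topmost landmark
    x_max, y_max = landmarks.max(axis=0)  # rightmost / bottommost landmark
    return x_min, y_min, x_max, y_max
```

This only works when ground-truth landmarks exist, which is exactly why it cannot be applied to a new, unannotated image.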
I don't understand. How do I extract a bounding box if my image is not from 300-W?
Hi!
From the previous issues I understand that it's not a good idea to use some random face detection algorithm to detect a face and then apply SAN, since it was trained on bounding boxes like the ones in 300-W. For instance, I downloaded the CelebA dataset with its own bounding box coordinates and ran inference with SAN. Since those bounding boxes are different (rectangular in nature), it performs poorly.
Okay, I get that. But how do I get a bounding box like the ones in 300-W? Yes, they released the code to generate bounding boxes for their dataset, and there are some ground-truth boxes. But say I have my own image. Where do I get a face detector that produces a bounding box for my image like the ones in 300-W, so that SAN performs the way it should?
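One possible workaround (not something confirmed by the SAN authors): take the rectangle from any off-the-shelf face detector and apply an empirically tuned correction so it better approximates the tight landmark box that the 300-W GT annotations correspond to. A minimal sketch; the `scale` and `y_shift` values below are placeholder assumptions that would need tuning on a few images where ground-truth landmarks are known:

```python
def detector_to_gt_like(x, y, w, h, scale=0.85, y_shift=0.10):
    """Convert a generic face-detector box (x, y, w, h) into something
    closer to a 300-W GT-style box (tight around the landmarks).

    `scale` and `y_shift` are assumed correction factors, not values from
    the SAN repo: detector boxes are usually looser and centred higher on
    the face than the tight landmark box.
    """
    cx, cy = x + w / 2.0, y + h / 2.0    # centre of the detector box
    new_w, new_h = w * scale, h * scale  # shrink the box
    cy += h * y_shift                    # shift the centre downwards
    x_min, y_min = cx - new_w / 2.0, cy - new_h / 2.0
    x_max, y_max = cx + new_w / 2.0, cy + new_h / 2.0
    return x_min, y_min, x_max, y_max
```

Whether any fixed correction is good enough would have to be checked against SAN's accuracy on a small validation set; otherwise re-training on detector boxes, as suggested above, seems to be the intended route.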