Closed bobetocalo closed 7 years ago
That's just the convention we use (the original *.pts files were created with Matlab and are thus 1-indexed).
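A minimal sketch of what this convention change amounts to, assuming the stored boxes are plain (xmin, ymin, xmax, ymax) tuples (function name is illustrative, not from the repo): converting a 0-indexed bounding box to the 1-indexed Matlab convention just adds 1 to every coordinate, which reproduces the 1-pixel offset reported below.

```python
def to_one_indexed(bb):
    """Shift a 0-indexed (xmin, ymin, xmax, ymax) box to 1-indexed
    Matlab-style coordinates by adding 1 to each coordinate."""
    return tuple(c + 1.0 for c in bb)

# Original detector box for afw/823016568_2 (0-indexed):
original = (396.5, 444.5, 684.5, 708.5)
print(to_one_indexed(original))  # (397.5, 445.5, 685.5, 709.5)
```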
The 300-W dataset comes in two versions: one assumes the existence of a face detector (the one we followed) and the other does not. You can find more about this here. A DPM-based detector was used for the detection.
https://ibug.doc.ic.ac.uk/media/uploads/documents/sagonas_2016_imavis.pdf
Dear colleagues,
First of all, I would like to congratulate you on your excellent work. I have found a critical difference in the bounding-box annotations: why are your annotations shifted by 1 pixel? For example: Image: afw/823016568_2.pts
Your annotations: Bounding box: [ 397.500 445.500 397.500 709.500 685.500 709.500 685.500 445.500 ]
Original annotations: bb_detector: 396.5,444.5,684.5,708.5
On the other hand, how did you compute the bounding boxes for the 300-W private images? Are you using a public face detector? I have not found it on the website. "As opposed to last year's Challenge (300-W 2013), in this version we will not provide any face detection initializations. Each system should detect the face in the image and then localize the facial landmarks."
I look forward to your response.
Best regards, Roberto Valle