Hi Huxwell, did you follow the data preparation process? There are actually 31957 gts in the WIDER FACE validation set, not 3906.
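(A quick way to sanity-check the gt count on your side is a minimal sketch like the one below; it assumes the Pascal-VOC XML annotations and the data/WIDERFace/WIDER_val/Annotations path used later in this thread, so adjust the path to your own layout.)

```python
# Minimal sketch: count the <object> entries across all Pascal-VOC XML
# annotation files to see how many gt boxes your setup actually contains.
# The directory path is an assumption taken from the commands in this thread.
import glob
import xml.etree.ElementTree as ET

ANN_DIR = "data/WIDERFace/WIDER_val/Annotations"

total = 0
for xml_path in glob.glob(f"{ANN_DIR}/*.xml"):
    total += len(ET.parse(xml_path).getroot().findall("object"))

print(f"total gt boxes: {total}")
```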
Thanks for a quick reply! It proves there is something wrong with my data.
Do you have a description of the data preparation process, or code for it, available? I will now try to use the following repo https://github.com/qdraw/tensorflow-face-object-detector-tutorial with data from http://shuoyang1213.me/WIDERFACE/ and see if it helps.
Please check in #11
After following the instructions I got: (results screenshot not preserved), which is even weirder to me. I will try again from scratch.
I have made sure that I am following the instructions correctly, but I am still getting the 0.824 mAP results as above.
The images are downloaded from https://drive.google.com/file/d/0B6eKvaijfFUDd3dIRmpvSk8tLUk/view?usp=sharing, linked from the official dataset webpage: http://shuoyang1213.me/WIDERFACE/
The model is downloaded from the link in this repository: https://drive.google.com/file/d/1zU738coEVDBkLBUa4hvJUucL7dcSBT7v/view?usp=sharing
XML annotations are from the repository you refer to in the instructions: https://github.com/akofman/wider-face-pascal-voc-annotations/tree/master/WIDER_val_annotations
Then I run the bash command below to get val.txt:
ls -l data/WIDERFace/WIDER_val/Annotations | grep "^-" | awk '{print $9}' | cut -d '.' -f 1 > data/WIDERFace/WIDER_val/val.txt
(src: https://github.com/Media-Smart/vedadet/blob/main/configs/trainval/tinaface/gen_xml_name_txt.sh)
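(The ls -l | grep | awk pipeline depends on the column layout of ls -l; a minimal Python equivalent, assuming the same directory layout as in the command above, would be:)

```python
# Minimal sketch: write the XML basenames (without extension) to val.txt,
# equivalent to the ls | grep | awk pipeline above. Paths are assumptions
# copied from the command in this thread.
from pathlib import Path

ann_dir = Path("data/WIDERFace/WIDER_val/Annotations")
names = sorted(p.stem for p in ann_dir.glob("*.xml"))
Path("data/WIDERFace/WIDER_val/val.txt").write_text("\n".join(names) + "\n")
```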
I have made a colab (jupyter) notebook that runs these commands and doesn't require setup: https://drive.google.com/file/d/1eIC3zGbns9PhO3FFEj2zodIzBHZ8JaTT/view?usp=sharing , resulting in 0.824 mAP. If I manage to get the proper numbers, I will create a pull request with a polished version of this notebook.
Thanks again for your kind assistance!
Hi @Huxwell, sorry for that, we missed a step in the evaluation. In order to align with the official evaluation, the generated xml files should be further processed to filter out the face boxes that are ignored in the evaluation. That's why the number of gts is larger. Please check this.
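(For readers landing here later: the repo's own filtering script is the authoritative one, but the idea of that step looks roughly like the sketch below, which drops every <object> that the official evaluation ignores from a Pascal-VOC XML file. The function name, and the assumption that the XML objects appear in the same order as the boxes in the official .mat files, are mine, not from the repo.)

```python
# Illustrative sketch only (not the repo's filter_widerface_val.py): keep only
# the <object> entries that the official WIDER FACE evaluation counts.
# Assumes the <object> elements appear in the same order as the boxes in the
# official ground-truth .mat files.
import xml.etree.ElementTree as ET

def filter_voc_xml(xml_path, keep_indices):
    """Keep only the <object> entries whose 1-based position is in keep_indices."""
    tree = ET.parse(xml_path)
    root = tree.getroot()
    for idx, obj in enumerate(root.findall("object"), start=1):
        if idx not in keep_indices:
            root.remove(obj)
    tree.write(xml_path)
```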
Hey, thanks a lot, I was finally able to get 0.923 mAP!
The linked notebook will now produce such results without any setup.
I wonder, though, why is filtering out some boxes required? Are the results reliable? Is such filtering also done for other models, e.g. RetinaFace or YOLOv5?
As I said before, the box filtering is based on the official evaluation tools; it is not arbitrary. There are some .mat files in the folder 'eval_tools/ground_truth', and the required boxes are defined in these .mat files.
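(If anyone wants to see this concretely, here is a minimal sketch. It assumes scipy is installed and that the .mat files follow the layout used by the official MATLAB eval tools (event_list / file_list / face_bbx_list in wider_face_val.mat, gt_list in the per-difficulty files); the choice of the "hard" subset is just for illustration. It only compares the raw number of annotated boxes with the number the official protocol keeps.)

```python
# Minimal sketch: load the official ground-truth .mat files and compare the
# total number of annotated boxes with the number of boxes the official
# evaluation keeps (gt_list holds 1-based indices of the kept boxes).
from scipy.io import loadmat

GT_DIR = "eval_tools/ground_truth"  # assumed location of the official .mat files
gt_mat = loadmat(f"{GT_DIR}/wider_face_val.mat")
hard_mat = loadmat(f"{GT_DIR}/wider_hard_val.mat")

event_list = gt_mat["event_list"]
file_list = gt_mat["file_list"]
facebox_list = gt_mat["face_bbx_list"]
keep_list = hard_mat["gt_list"]

total_boxes, kept_boxes = 0, 0
for ei in range(len(event_list)):
    for fi in range(len(file_list[ei][0])):
        total_boxes += facebox_list[ei][0][fi][0].shape[0]
        kept_boxes += keep_list[ei][0][fi][0].size

print(f"annotated boxes: {total_boxes}, kept by the official eval: {kept_boxes}")
```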
Tested with tinaface_r50_fpn_widerface.pth and went through the process above, but I still get the results below. After running filter_widerface_val.py the xml files don't seem to change at all?
+-------+-------+---------+--------+-------+
| class | gts   | dets    | recall | ap    |
+-------+-------+---------+--------+-------+
| face  | 31957 | 8420592 | 0.837  | 0.628 |
+-------+-------+---------+--------+-------+
| mAP   |       |         |        | 0.628 |
+-------+-------+---------+--------+-------+
Hello, I'm getting the following results. Can anyone tell me why I'm getting such low results? I've followed your link: https://github.com/Media-Smart/vedadet/tree/main/configs/trainval/tinaface
And also this one: https://blog.csdn.net/qq_35975447/article/details/110430390
But still getting very poor results.
Thanks a lot for your colab notebook, man!
cd ${vedadet_root}
python configs/trainval/tinaface/test_widerface.py configs/trainval/tinaface/tinaface.py tinaface_r50_fpn_widerface.pth
I have run the TinaFace example on the WIDER FACE validation set and got only 0.825 mAP (I was expecting something north of 0.9). Is this the expected validation score, or have I messed up the dataset setup? Is there any chance you can share the setup you use to adapt WIDER FACE to your pipeline?