RunningLeon opened 6 years ago
I have tested it on the Wider Face dataset with the official evaluation tools, and the result is about 3~5% lower than the original Caffe model. I think the author may have adopted some additional training strategy, such as preparing more negative data, because I noticed that his PNET generates fewer false candidates than mine. Maybe you can try something like that.
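In case it helps, here is a rough sketch of the kind of extra negative mining I had in mind: randomly crop patches whose IoU with every ground-truth face stays below a threshold and label them as non-face for PNET training. The helper names and the 0.3 threshold are just my guesses, not the original author's code:

```python
import numpy as np

def iou(box, gt_boxes):
    """IoU between one candidate crop and all ground-truth boxes (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], gt_boxes[:, 0])
    y1 = np.maximum(box[1], gt_boxes[:, 1])
    x2 = np.minimum(box[2], gt_boxes[:, 2])
    y2 = np.minimum(box[3], gt_boxes[:, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_gt = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
    return inter / (area_box + area_gt - inter + 1e-10)

def sample_negatives(img, gt_boxes, num_neg=100, size=12, max_iou=0.3):
    """Randomly crop patches whose IoU with every GT face is below max_iou."""
    h, w = img.shape[:2]
    negatives = []
    while len(negatives) < num_neg:
        # random square crop no smaller than the PNET input size
        crop_size = np.random.randint(size, max(size + 1, min(h, w) // 2))
        nx = np.random.randint(0, w - crop_size)
        ny = np.random.randint(0, h - crop_size)
        crop_box = np.array([nx, ny, nx + crop_size, ny + crop_size])
        if np.max(iou(crop_box, gt_boxes)) < max_iou:
            patch = img[ny:ny + crop_size, nx:nx + crop_size]
            # resize the patch to 12x12 (e.g. with cv2.resize) and label it 0 (non-face)
            negatives.append(patch)
    return negatives
```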
Good luck!
@wangbm Are you referring to `pretrained/*.npy`, or the models at `save_model/all_in_one`? Or are they actually the same?
Hi, have you evaluated the pretrained model on any other dataset? Could you post the results? Thanks a lot.