Closed aristofun closed 6 years ago
The RGB image (`input_img`) is used for dlib (it assumes RGB images). The input to the age-gender-estimation model is BGR images (cropped from `img`, not `input_img`; yes, it's confusing).
https://github.com/yu4u/age-gender-estimation/blob/master/demo.py#L99
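A minimal sketch of that channel convention (NumPy only; the array names `img` and `input_img` follow the demo script, and reversing the last axis is what `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` does there):

```python
import numpy as np

# OpenCV loads images in BGR channel order; dlib's face detector
# expects RGB, so the demo makes an RGB version for detection only.
img = np.zeros((64, 64, 3), dtype=np.uint8)   # stand-in for cv2.imread (BGR)
img[..., 0] = 255                             # a pure-blue image in BGR

input_img = img[..., ::-1]                    # RGB view for dlib
print(input_img[0, 0].tolist())               # -> [0, 0, 255]: blue moved last

# Face crops are then taken from the ORIGINAL BGR array `img`
# (not input_img) before being fed to the age-gender model.
```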
And, btw, what is the image size you used for the provided pretrained model (weights.28-3.73.hdf5)? Thank you in advance.
The size is 64x64.
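So a face crop can be prepared for the model roughly like this (a sketch; the nearest-neighbour resize below stands in for the `cv2.resize(face, (64, 64))` call in demo.py, and the input shapes are illustrative):

```python
import numpy as np

# The pretrained model expects 64x64 BGR face crops, batched as
# (N, 64, 64, 3). Minimal nearest-neighbour resize, in place of
# cv2.resize with a (64, 64) target.
def resize_to_64(face: np.ndarray) -> np.ndarray:
    h, w = face.shape[:2]
    ys = np.arange(64) * h // 64          # source row for each output row
    xs = np.arange(64) * w // 64          # source column for each output column
    return face[ys][:, xs]

face = np.random.randint(0, 256, (120, 90, 3), dtype=np.uint8)  # a BGR crop
batch = resize_to_64(face)[np.newaxis]    # add batch axis for the network
print(batch.shape)                        # -> (1, 64, 64, 3)
```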
Thank you so much!
It seems like you're feeding RGB images to the net in the demo: https://github.com/yu4u/age-gender-estimation/blob/master/demo.py#L83
But it looks like it was trained on default-BGR OpenCV images: https://github.com/yu4u/age-gender-estimation/blob/master/create_db.py#L55
Please clarify. And, btw, what is the image size you used for the provided pretrained model (weights.28-3.73.hdf5)? Thank you in advance.