Closed Xu-Yao closed 2 years ago
Hello @Xu-Yao ,
Thank you for your interest in our work. The accuracy reported is on the test set following the AdienceFaces protocol.
I am not familiar with the FFHQ dataset. Does it have age labels? What accuracy did you get?
Gil
The accuracy is very low when I convert it to a Core ML model and test it in Xcode.
As you can see, the second image is the same image you tested in your demo Python notebook.
Here is the conversion code:
```python
import coremltools

# Convert the Caffe age model (weights + deploy prototxt) to Core ML.
# Output classes: ['(0, 2)', '(4, 6)', '(8, 12)', '(15, 20)',
#                  '(25, 32)', '(38, 43)', '(48, 53)', '(60, 100)']
coreml_model = coremltools.converters.caffe.convert(
    ('age_net.caffemodel', 'deploy_age.prototxt'),
    image_input_names='data',
    class_labels='age_net_labels.txt',
)
print(coreml_model)
coreml_model.save('age_net_caffe.mlmodel')
```
The content of age_net_labels.txt is:

```
(0, 2)
(4, 6)
(8, 12)
(15, 20)
(25, 32)
(38, 43)
(48, 53)
(60, 100)
```
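For reference, a minimal sketch of how such a labels file is used: assuming the model emits an 8-way softmax in the same order as the file, the predicted index maps straight back to an age bracket. `decode_age` is a hypothetical helper, not part of the repository.

```python
import numpy as np

# The eight age brackets, in the same order as age_net_labels.txt.
AGE_LABELS = ['(0, 2)', '(4, 6)', '(8, 12)', '(15, 20)',
              '(25, 32)', '(38, 43)', '(48, 53)', '(60, 100)']

def decode_age(probs):
    """Map an 8-way softmax output to its age-range label."""
    probs = np.asarray(probs)
    assert probs.shape == (8,), "expected one probability per age bracket"
    return AGE_LABELS[int(np.argmax(probs))]

# Example: a distribution peaked on the fifth bracket.
print(decode_age([0.01, 0.02, 0.05, 0.10, 0.60, 0.12, 0.06, 0.04]))  # (25, 32)
```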
@GilLevi
After testing in Xcode, I thought it might be a problem with the conversion, so I tried to test it on Colab with Python in exactly the same way as in your demo notebook, but I got this error (issue link), so I cannot verify whether the conversion is the problem.
Hello,
First, thank you so much for publishing the code and the pretrained models!
I would like to know a little more about the age classification accuracy you report in your paper: is it for the training set or the test set? I tested your model on the FFHQ dataset (https://github.com/NVlabs/ffhq-dataset), downsampling images to 256x256, swapping to BGR, subtracting your mean image, and center cropping to 227x227, but I got very poor results, far below the accuracy reported in your paper :(