udrechsler opened this issue 5 years ago
@udrechsler,
I will share my inference code in the near future. The key recipe for ImageNet is the following:

`cv2.resize(img, (h, w), interpolation=cv2.INTER_CUBIC)`,

while the Keras default is

`img.resize((w, h), pil_image.NEAREST)`.

You can try `pil_image.BICUBIC`, or OpenCV as I do.

I'm struggling to get the same numbers shown in the README as well; I consistently fall 7-8% behind what is listed. I've tried implementing the method @taehoonlee described, but I don't think I fully understand the process, as my accuracy dropped further. Have you released the code yet?
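To make the interpolation mismatch concrete, here is a minimal sketch (not from this thread) that upsamples the same tiny image with the Keras-default NEAREST filter and with BICUBIC; the synthetic 8x8 gradient is my own placeholder for a real input image. The two outputs differ pixel-wise, which is the kind of preprocessing drift that can move reported accuracy by several points:

```python
import numpy as np
from PIL import Image

# Hypothetical stand-in for a real image: an 8x8 grayscale gradient.
small = Image.fromarray((np.arange(64, dtype=np.uint8).reshape(8, 8) * 4))

# Keras default resampling vs. the bicubic filter suggested above.
nearest = np.asarray(small.resize((224, 224), Image.NEAREST))
bicubic = np.asarray(small.resize((224, 224), Image.BICUBIC))

# The filters produce different pixel values for the same source image,
# so weights evaluated with one filter will score differently under the other.
mean_abs_diff = np.abs(nearest.astype(int) - bicubic.astype(int)).mean()
```

Note that PIL's `resize` takes `(width, height)` while `cv2.resize`'s `dsize` is also `(width, height)`; for a square 224x224 target the order doesn't matter, but it does for rectangular targets.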
I think I've got it working now. What do you do in the cases where an image is smaller than 224x224 to begin with?
@BenTaylor3115, please just keep the ratio 7/8 (= 224/256). And as far as I know, there are no examples with image sizes smaller than 224 in the official ImageNet results.
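A minimal sketch of how I read the 7/8 advice — scale the shorter side to 224 / (7/8) = 256 (upscaling if needed, which also covers images smaller than 224), then take a centered 224x224 crop. The function name and return convention are my own, not from the repo:

```python
def eval_resize_crop_dims(w, h, crop=224, ratio=7 / 8):
    """Single-crop ImageNet evaluation geometry: resize so the shorter
    side equals crop / ratio (256 for a 224 crop), then center-crop.

    Returns ((new_w, new_h), (left, top, right, bottom)).
    """
    short = crop / ratio              # 256.0 for the default arguments
    scale = short / min(w, h)         # > 1 for small images, i.e. upscale
    new_w, new_h = round(w * scale), round(h * scale)
    left = (new_w - crop) // 2
    top = (new_h - crop) // 2
    return (new_w, new_h), (left, top, left + crop, top + crop)
```

For example, a 500x375 image is resized to 341x256 and cropped at (58, 16, 282, 240); a 224x224 image is first upscaled to 256x256, so the crop always covers 7/8 of the shorter side.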
I didn't think so. I may still have a problem somewhere. Do you have the code available for the down-sampling / cropping you used to achieve the results in the README? I'm happy to do the debugging myself if I have a reference for the correct approach.
Hi, it is not clear (at least to me) how the numbers in the README are generated. I've been trying to replicate/validate the listed accuracy values for the last few days, but I'm always off by at least a few percent, regardless of what pre-processing I use. Would it be possible to add the script used to generate these numbers?
Best regards, Drechsler