soeaver / caffe-model

Caffe models (including classification, detection and segmentation) and deploy files for famous networks
MIT License

Can't reproduce the reported accuracies in Inception-V4 and resnext101-64x4d. #52

Open latifisalar opened 6 years ago

latifisalar commented 6 years ago

Hi, I'm facing problems with the Caffe models of Inception-V4 and resnext101-64x4d. I can't reproduce the reported accuracies; I only get around 0.1%, which is no better than a random guess. I've tried both my own Python script (derived from the Caffe classification example) and the provided script (I'm aware of the crop_size and base_size and change them accordingly). I downloaded the validation images from ImageNet and am using their own val.txt, which is sorted, unlike yours. Do you have any idea what the problem could be? Thanks

soeaver commented 6 years ago

I think you should pay attention to the image pre-processing:

- RGB vs. BGR channel order
- The mean value and the std value
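
A minimal pycaffe sketch covering both points. The mean/std of 128 and the crop size of 395 come from later comments in this thread, and whether the BGR swap is needed depends on how each model was trained, so treat these as assumptions to verify against evaluation_cls.py:

```python
import caffe

def preprocess(img_path, crop_size=395, mean=128.0, std=128.0):
    # caffe.io.load_image returns float RGB in [0, 1]
    img = caffe.io.load_image(img_path)
    # Simplification: a real evaluation resizes to base_size and then
    # center-crops; here we resize directly to the crop size.
    img = caffe.io.resize_image(img, (crop_size, crop_size))
    img = img * 255.0              # back to [0, 255]
    img = img[:, :, ::-1]          # RGB -> BGR (the usual Caffe convention)
    img = (img - mean) / std       # subtract mean, divide by std
    return img.transpose(2, 0, 1)  # HWC -> CHW for the 'data' blob
```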

ashishfarmer commented 6 years ago

I am seeing the same problem as @latifisalar: I am not able to run Inception-V4 and get the reported accuracy on the ILSVRC2012 val set. I am using a mean value of 128 and a crop size of 395. @soeaver, could you share the transform_param that you used for the test?

ashishfarmer commented 6 years ago

Never mind, I figured it out: it is in evaluation_cls.py.
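
For anyone else who lands here, a hedged sketch of a single forward pass with those transforms, reusing the preprocess() helper sketched above. The file names are placeholders, and the output blob name 'prob' is an assumption to check against the deploy file:

```python
import caffe

# Placeholder paths; use the actual deploy/caffemodel files from this repo.
net = caffe.Net('deploy_inception-v4.prototxt',
                'inception-v4.caffemodel', caffe.TEST)

data = preprocess('ILSVRC2012_val_00000001.JPEG')  # helper sketched above
net.blobs['data'].reshape(1, *data.shape)          # 1 x 3 x 395 x 395
net.blobs['data'].data[0] = data
probs = net.forward()['prob'][0]                   # assumes output blob 'prob'
print(probs.argmax())                              # predicted class index
```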

kmonachopoulos commented 6 years ago

I am using the classification script with the correct configuration parameters on Inception-V3 and V4 and get really bad results. Even single-image inference gives wrong (misclassified) results. The same scripts work well on other networks such as VGG, but not on the pre-trained Inception-V3/V4 models. Do you know how to fix that?

latifisalar commented 6 years ago

My problem was the label order: the order of the class labels for the Inception networks is different from the order used by the VGG network.
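
To illustrate the pitfall: if the index order the network was trained with differs from the order your val.txt assumes, every prediction is scored against the wrong label and accuracy collapses to chance. A sketch of building a permutation between the two orderings, assuming one WordNet ID per line (the VGG-order file name is hypothetical):

```python
def load_synsets(path):
    # One WordNet ID (e.g. 'n01440764') per line, optionally
    # followed by a description.
    with open(path) as f:
        return [line.split()[0] for line in f if line.strip()]

vgg_order = load_synsets('synsets_vgg.txt')    # hypothetical: order val.txt assumes
inception_order = load_synsets('synsets.txt')  # order the Inception nets output

# remap[i] = label index (in val.txt's ordering) for network output index i
remap = [vgg_order.index(wnid) for wnid in inception_order]
```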

kmonachopoulos commented 6 years ago

So, which .txt file did you use for the annotations?

For reference, here is an excerpt of the file I am using:

1: 'goldfish, Carassius auratus',
2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias',
3: 'tiger shark, Galeocerdo cuvieri',
4: 'hammerhead, hammerhead shark',
5: 'electric ray, crampfish, numbfish, torpedo',
6: 'stingray',
7: 'cock',
8: 'hen',

latifisalar commented 6 years ago

The one you have is for VGG. I've attached the synsets for the Inception networks: synsets.txt. Update: new validation file for Inception: inception_val.txt
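
With a matching val list, a quick top-1 check looks like the following. This is a minimal sketch assuming inception_val.txt holds "<filename> <label>" pairs in the order these models expect, and it reuses the net and preprocess() from the sketches above:

```python
correct = 0
with open('inception_val.txt') as f:
    lines = [l.split() for l in f if l.strip()]

for fname, label in lines:
    net.blobs['data'].data[0] = preprocess(fname)  # helper sketched above
    pred = net.forward()['prob'][0].argmax()       # assumes output blob 'prob'
    correct += int(pred == int(label))

print('top-1 accuracy: %.4f' % (correct / float(len(lines))))
```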

kmonachopoulos commented 6 years ago

I still get wrong results with that annotation list. I have tried a lot of annotation lists that I found online (including the one you gave me) and it seems that none of them give the correct results. I think this has to do with the model.