Closed mrgloom closed 8 years ago
Our claim is that, on the ImageNet test set of 50,000 images, SqueezeNet accuracy is at least as good as AlexNet accuracy. You have to admit that running one image isn't a good statistical measure of accuracy. :)
Even when training AlexNet multiple times with different random seeds, we've found that some training runs produce an AlexNet model that gets your cat image right, and some that get it wrong. But, zooming out to a larger statistically-significant test set, each training run leads to a model with similar overall accuracy. Same deal with SqueezeNet.
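To illustrate this with purely synthetic data (the accuracy value, class count, and random predictions below are made up for illustration, not measured from any real model): two simulated training runs can have nearly identical aggregate accuracy over 50,000 images while still disagreeing on a large fraction of individual images, such as one particular cat photo.

```python
import random

random.seed(0)
n = 50000  # size of the evaluation set
truth = [random.randrange(1000) for _ in range(n)]  # synthetic ground-truth labels

def simulate_model(truth, acc, seed):
    """Simulate one training run: it gets a fraction `acc` of images right,
    but WHICH images it gets right depends on its random seed."""
    rng = random.Random(seed)
    preds = []
    for t in truth:
        if rng.random() < acc:
            preds.append(t)  # correct prediction
        else:
            # some wrong class, guaranteed != t
            preds.append((t + 1 + rng.randrange(998)) % 1000)
    return preds

run_a = simulate_model(truth, 0.572, seed=1)
run_b = simulate_model(truth, 0.572, seed=2)

acc_a = sum(p == t for p, t in zip(run_a, truth)) / n
acc_b = sum(p == t for p, t in zip(run_b, truth)) / n
disagree = sum(a != b for a, b in zip(run_a, run_b)) / n

print(f"run A accuracy: {acc_a:.3f}")
print(f"run B accuracy: {acc_b:.3f}")
print(f"fraction of images where the runs disagree: {disagree:.3f}")
```

The two aggregate accuracies come out within a fraction of a percent of each other, yet the runs disagree on well over half of the individual images, which is why a single cat picture tells you almost nothing.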
I ran into the same problem. The output of my test demo is:

278 n02119789 kit fox, Vulpes macrotis
151 n02085620 Chihuahua
263 n02113023 Pembroke, Pembroke Welsh corgi
277 n02119022 red fox, Vulpes vulpes
331 n02326432 hare
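For reference, top-5 output like the above is usually produced by sorting the network's softmax vector and mapping indices through `synset_words.txt`. A minimal sketch (the label subset and probability vector here are toy values, not real SqueezeNet output):

```python
import numpy as np

# Tiny illustrative subset of ImageNet labels; the real
# synset_words.txt has 1000 lines in "wnid description" format.
synset_words = [
    "n02085620 Chihuahua",
    "n02113023 Pembroke, Pembroke Welsh corgi",
    "n02119022 red fox, Vulpes vulpes",
    "n02119789 kit fox, Vulpes macrotis",
    "n02326432 hare",
]

def top_k(probs, labels, k=5):
    """Return the k highest-probability (index, label, prob) triples."""
    order = np.argsort(probs)[::-1][:k]  # indices sorted by descending probability
    return [(int(i), labels[i], float(probs[i])) for i in order]

# Hypothetical softmax output over the 5-class toy label set.
probs = np.array([0.15, 0.10, 0.12, 0.55, 0.08])
for idx, label, p in top_k(probs, synset_words):
    print(f"{idx} {label} ({p:.2f})")
```

With a real model you would pass `net.blobs['prob'].data[0]` (a 1000-element vector) and the full 1000-line label list instead.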
@auzxb As said before, this is 'normal' behaviour: accuracy is calculated over a large set of images, and on average it should be about the same as AlexNet's accuracy.
You can look at my results; the performance really is about the same: https://github.com/mrgloom/kaggle-dogs-vs-cats-solution
I'm trying to reproduce this example http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb using SqueezeNet, but for this picture https://github.com/BVLC/caffe/blob/master/examples/images/cat.jpg the predicted class is 278, which is
n02119789 kit fox, Vulpes macrotis
according to https://github.com/HoldenCaulfieldRye/caffe/blob/master/data/ilsvrc12/synset_words.txt
Is this normal, or is something wrong?
Here is the full code: