Atcold opened this issue 8 years ago (status: Open)
@Atcold thanks. It is unlikely that I will have spare GPUs to train it in the near future.
Oh, OK, I'll train it myself then. Hopefully I get comparable results, which could broaden your comparison.
Also, what about NIN 0(?)
and inception-v2 51.00(?)
? You didn't include any additional note for them.
I couldn't get the specified performance for these models, probably due either to preprocessing or to class ordering.
"class ordering probably"? What do you mean?
there is classes.t7
which is sometimes generated randomly, and the NIN authors used a different ordering from the one usually used in caffe. It is random, for example, in https://github.com/soumith/imagenet-multiGPU.torch. I just didn't have enough courage to fix that
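The mismatch described above can be undone by remapping class indices between the two orderings. Here is a minimal sketch, assuming you have both orderings as lists of the same WordNet synset IDs (the variable names and the three-class toy data are hypothetical, not taken from either repo):

```python
# Sketch: remap predicted class indices between two class orderings.
# `caffe_synsets` and `torch_synsets` are hypothetical lists of the same
# synset IDs, each in its own (possibly randomly generated) order.

def build_remap(src_synsets, dst_synsets):
    """Return remap such that remap[src_index] == dst_index
    for the same synset ID."""
    dst_pos = {synset: i for i, synset in enumerate(dst_synsets)}
    return [dst_pos[s] for s in src_synsets]

# Toy example with three classes:
caffe_synsets = ["n01440764", "n01443537", "n01484850"]
torch_synsets = ["n01484850", "n01440764", "n01443537"]

remap = build_remap(caffe_synsets, torch_synsets)
# A model trained with the caffe ordering that predicts index 0
# ("n01440764") corresponds to index remap[0] under the torch ordering.
print(remap)
```

Without such a remap, top-1 accuracy on a model evaluated against a shuffled classes.t7 collapses to roughly chance level, which would explain the unreproducible numbers.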
Hmm... Not sure I got it right.
Anyhow, I believe your inception-v2
is actually GoogLeNet
(without first-layer factorisation).
Why not BVLC-GoogLeNet (inception 1) there? Is there some reason for it?
@ducha-aiki no one cared to import it from caffe; there are better models trained in torch
@szagoruyko well, there is no Inception 1 model in your evaluation, wherever it was trained :)
@ducha-aiki I believe @Atcold should know better; he did an evaluation paper with these networks recently. Alfredo, can you help us? Is the inception-v2
in my table actually a GoogLeNet?
Here I am, sorry for the delay, I was on vacation.
There is a bit of confusion with the nomenclature; I'll try to shed some light (ref. http://arxiv.org/abs/1602.07261). Inception is the name of Google's whole family of networks that use parallel convolutions with different receptive fields.
The only correct GoogLeNet model around is this -> https://github.com/e-lab/imagenet-multiGPU.torch/blob/trueGoogLeNet/models/googlenetFixed_cudnn.lua, which is not trained. All other models are not GoogLeNet, but approximations of it.
In my analysis, I dismissed the Inception-v2 from this repo because it was not a reliable reproduction: for a true Inception-v2, the input convolution should have been factorised, bringing a substantial saving in operation count.
I hope I have answered somehow. If not, please feel free to bug me 😊
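To make the operation-count argument concrete, here is a back-of-the-envelope sketch of the saving from factorising a 7x7 convolution into three stacked 3x3 convolutions (same effective receptive field). It assumes, for simplicity, equal channel counts C at input and output of every stage; it is an illustration of the principle, not the exact stem of GoogLeNet or Inception-v2:

```python
# Multiply-accumulate (MAC) count per output spatial position for a
# single 7x7 convolution vs. three stacked 3x3 convolutions, assuming
# C input and C output channels at every stage (a simplifying
# assumption, not the exact channel layout of any published stem).

def conv_macs_per_position(kernel, c_in, c_out):
    """MACs needed to compute one output activation of a conv layer."""
    return kernel * kernel * c_in * c_out

C = 64
macs_7x7 = conv_macs_per_position(7, C, C)            # 49 * C^2
macs_3x3_stack = 3 * conv_macs_per_position(3, C, C)  # 27 * C^2

saving = 1 - macs_3x3_stack / macs_7x7                # 1 - 27/49 ≈ 45%
print(f"7x7: {macs_7x7} MACs, three 3x3: {macs_3x3_stack} MACs, "
      f"saving: {saving:.0%}")
```

The ratio 27/49 is independent of C, so the ~45% saving holds at any width under this equal-channels assumption; with only 3 input channels (as in an RGB stem) the arithmetic changes and the saving is smaller or can disappear.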
I was thinking that, for completeness, you may want to train a GoogLeNet as well. Here's the model.