Closed protossw512 closed 7 years ago
The preprocessing is applied as:

def preprocess_input(x):
    x = np.divide(x, 255.0)
    x = np.subtract(x, 1.0)
    x = np.multiply(x, 2.0)
    return x
This is not a typo.
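A quick sanity check of what this function actually produces (pure NumPy, with a hypothetical pixel array for illustration) — raw values in [0, 255] come out in [-2, 0]:

```python
# Sanity check: divide by 255 -> [0, 1]; subtract 1 -> [-1, 0];
# multiply by 2 -> [-2, 0].
import numpy as np

def preprocess_input(x):
    x = np.divide(x, 255.0)
    x = np.subtract(x, 1.0)
    x = np.multiply(x, 2.0)
    return x

pixels = np.array([0.0, 127.5, 255.0])
print(preprocess_input(pixels))  # [-2. -1.  0.]
```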
Without knowing what your training data is, it is hard to pinpoint what might be going wrong. Within the next couple of days I will be releasing a Keras 2 version of this model with a few extra training-related hyperparameters that I did not include in this original release. Perhaps that is what is causing issues for you.
It is a private dataset. It is interesting, since the preprocessing actually normalizes each pixel to [-2, 0]. I switched to the TensorFlow backend and it worked like magic. I also tried normalizing the data to [-2, 0] and to [-1, 1]; there is little difference in accuracy.
Thank you for your reply and your effort on providing inception implementation on Keras.
+1. I ran into the same problem. With the TensorFlow backend, I could get 80% classification accuracy with ResNet-50, while I only got 70% with Inception-v4. Something is going wrong... By the way, I have ported the code to Keras 2 style.
I ran ResNet-50 and Inception-v4 on the same dataset for 100 epochs and here's what I got, in case this might be useful. [Accuracy and loss plots attached]
TensorFlow backend, Keras 2. Both models have the last layer replaced by two dense layers. I can confirm Inception strangely stops training at ~70% accuracy.
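For reference, a minimal sketch of that fine-tuning setup, assuming a standalone Keras/TensorFlow install. `add_new_head` and the hidden size of 256 are hypothetical choices for illustration; any functional-API backbone (such as the Inception-v4 builder from this repo) can be passed in as `base_model`:

```python
# Hedged sketch: replace a pretrained backbone's classifier head with
# two new dense layers for fine-tuning, as described in the thread.
from tensorflow.keras import layers, models

def add_new_head(base_model, num_classes, freeze_base=True):
    # Optionally freeze the pretrained convolutional layers so only
    # the new head is trained at first.
    if freeze_base:
        for layer in base_model.layers:
            layer.trainable = False
    x = base_model.output
    x = layers.GlobalAveragePooling2D()(x)        # collapse spatial dims
    x = layers.Dense(256, activation='relu')(x)   # first new dense layer
    out = layers.Dense(num_classes, activation='softmax')(x)  # second
    return models.Model(inputs=base_model.input, outputs=out)
```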
The slow convergence doesn't seem consistent with the comparison in the Inception-v4 paper (Figure 25).
@zyxue @igrekun @pengpaiSH @protossw512
Hi all! Sorry for taking so long to address these issues. I've added a bunch of training parameters and tweaked a few others so that the Keras version matches the tf.slim training parameters exactly. Hopefully this will fix the issues mentioned above. Thanks again for being so patient!
I'll close this out for now, if anyone runs into similar issues feel free to either open a new issue or comment on this one and I'll re-open it.
Hi, I was trying to remove the fully connected layer and use only the pre-trained convolutional layers for fine-tuning. However, the result was pretty bad. I saw that in evaluate_image.py you applied:
def preprocess_input(x):
    x = np.divide(x, 255.0)
    x = np.subtract(x, 1.0)
    x = np.multiply(x, 2.0)
    return x
It seems we need to normalize the input image to the range [-2, 0]. I followed this procedure and it did improve a little, but the results are still very bad.
I used a pretrained ResNet-50 from Keras to fine-tune the model and got 92% accuracy on the test data, but when I applied the same approach with the Inception-v4 net, I only got 70% accuracy. Something must be wrong, but I couldn't figure out what.
3/18 edit:
I found that someone said (http://stackoverflow.com/a/39597537/7555390) that for Inception the input has to be normalized to [-1, 1], but your code seems to map it to [-2, 0]. Is that a typo?
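For comparison, the commonly cited Inception preprocessing (as in tf.slim) centers at 0.5 before scaling, which maps pixels to [-1, 1] rather than [-2, 0]. A small illustration with hypothetical pixel values:

```python
# Standard [-1, 1] Inception-style preprocessing for comparison:
# divide by 255 -> [0, 1]; subtract 0.5 -> [-0.5, 0.5]; multiply by 2 -> [-1, 1].
import numpy as np

def preprocess_minus1_1(x):
    x = np.divide(x, 255.0)
    x = np.subtract(x, 0.5)   # center at zero
    x = np.multiply(x, 2.0)   # scale to [-1, 1]
    return x

pixels = np.array([0.0, 127.5, 255.0])
print(preprocess_minus1_1(pixels))  # [-1.  0.  1.]
```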