gmalivenko / pytorch2keras

PyTorch to Keras model converter
https://pytorch2keras.readthedocs.io/en/latest/
MIT License

Model's Accuracy Drops A Lot after converting #150

Open MT010104 opened 1 year ago

MT010104 commented 1 year ago

Describe the bug I trained a LeNet5 model on MNIST in PyTorch and used pytorch2keras to convert it to a Keras model. However, the accuracy of the converted Keras model is only about 10% (chance level on MNIST), while the PyTorch model's accuracy is close to 100%. A VGG16 model trained on CIFAR10 did not show this problem. For consistency, no additional preprocessing is applied to the data in either framework. I've verified that neither the data nor the model itself is at fault.
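The conversion step itself is not included in the reproduction snippet below; presumably it was done roughly along these lines, using the LeNet class shown further down (a minimal sketch assuming the pytorch_to_keras entry point with an NCHW input of shape (1, 28, 28); the checkpoint path is hypothetical):

import torch
from pytorch2keras import pytorch_to_keras

model = LeNet()
model.load_state_dict(torch.load("models/mnist_lenet5.pth"))  # hypothetical checkpoint path
model.eval()

# Dummy NCHW input used to trace the graph; input_shapes exclude the batch dimension.
dummy_input = torch.randn(1, 1, 28, 28)
k_model = pytorch_to_keras(model, dummy_input, [(1, 28, 28)], verbose=True)
k_model.save("models/mnist_lenet5.h5")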

To Reproduce Snippet of your code

import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)   # 1x28x28 -> 6x24x24
        self.conv2 = nn.Conv2d(6, 16, 5)  # 6x12x12 -> 16x8x8
        self.fc1 = nn.Linear(256, 120)    # 16*4*4 = 256 features after the second pooling
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = F.max_pool2d(out, 2)
        out = F.relu(self.conv2(out))
        out = F.max_pool2d(out, 2)
        out = out.view(int(out.size(0)), -1)  # flatten the NCHW feature map
        out = F.relu(self.fc1(out))
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out
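For comparison, the PyTorch accuracy quoted above was presumably measured roughly like this (a sketch using the same /255 normalization and an explicit NCHW reshape; the checkpoint path is hypothetical):

import torch
from keras.datasets import mnist

(_, _), (x_test, y_test) = mnist.load_data()

model = LeNet()
model.load_state_dict(torch.load("models/mnist_lenet5.pth"))  # hypothetical checkpoint path
model.eval()

# PyTorch expects NCHW input: (batch, 1, 28, 28)
x = torch.from_numpy(x_test.astype("float32") / 255.0).reshape(-1, 1, 28, 28)
y = torch.from_numpy(y_test).long()
with torch.no_grad():
    pred = model(x).argmax(dim=1)
print("pytorch_acc:", (pred == y).float().mean().item())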
import numpy as np
import keras
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_test = x_test.astype('float32') / 255.0
x_test = x_test.reshape(dataset_name_dict[name])  # dataset_name_dict[name] (defined elsewhere) gives the target input shape
y_test = keras.utils.to_categorical(y_test, 10)

model = keras.models.load_model("models/mnist_lenet5.h5")  # converted LeNet5
model.summary()
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.SGD(lr=1e-3, momentum=0.9), metrics=['accuracy'])
y_predict = np.argmax(model.predict(x_test, verbose=0), axis=1)
origin_acc = model.evaluate(x_test, y_test, verbose=0)[1]
print(f"origin_acc : {origin_acc}")

Expected behavior I would like to know why the accuracy drops and if there is a solution.