Is there any way to train the autoencoder and the classifier jointly? I think the representation could then be optimized to give the best classification results, instead of solely minimizing the reconstruction loss.
@xuefeng7 I am also interested in this. Did you find a solution?
Yep, you can "connect" the features from the discriminator (classifier) to the decoder. Another approach is provided by Adversarial Autoencoders: https://towardsdatascience.com/a-wizards-guide-to-adversarial-autoencoders-part-4-classify-mnist-using-1000-labels-2ca08071f95
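To make the first suggestion concrete, here is a minimal sketch of joint training in Keras: one shared encoder feeding both a decoder head (reconstruction loss) and a softmax head (classification loss), trained on a weighted sum of the two. The input size, code size, class count, and loss weights are illustrative placeholders, not something from this thread:

```python
from keras.layers import Input, Dense
from keras.models import Model

# shared encoder (sizes are placeholders, e.g. flattened 28x28 MNIST)
inp = Input(shape=(784,))
code = Dense(32, activation='relu', name='code')(inp)

# head 1: decoder, trained with a reconstruction loss
recon = Dense(784, activation='sigmoid', name='recon')(code)

# head 2: classifier on the latent code, trained with a classification loss
cls = Dense(10, activation='softmax', name='cls')(code)

joint = Model(inp, [recon, cls])
joint.compile(optimizer='adam',
              loss={'recon': 'mean_squared_error',
                    'cls': 'categorical_crossentropy'},
              # controls how much classification shapes the representation
              loss_weights={'recon': 1.0, 'cls': 0.5},
              metrics={'cls': 'accuracy'})

# joint.fit(x_train, {'recon': x_train, 'cls': y_train_onehot}, ...)
```

Because both losses back-propagate through the same encoder, the latent code is pulled toward features that reconstruct well and also separate the classes, which is exactly the trade-off the loss weights control.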
Hello everyone, I have the same problem. I am trying to find working code for improving classification using an autoencoder. I followed this example: keras autoencoder vs PCA. But instead of MNIST data, I tried to use it with the GTSRB dataset. This is my code:
```python
from keras.layers import Input, Dense
from keras.models import Model
from keras import regularizers
import numpy as np
import matplotlib.pyplot as plt
import pickle
from matplotlib import pyplot
import cv2
import pandas as pd

use_regularizer = True
my_regularizer = None
my_epochs = 100
features_path = 'simple_autoe_features.pickle'
labels_path = 'simple_autoe_labels.pickle'

if use_regularizer:
    # note: use of 10e-5 leads to blurred results
    # my_regularizer = regularizers.l1(10e-8)
    my_regularizer = regularizers.l1(10e-12)
    # and a larger number of epochs: with the added regularization the
    # model is less likely to overfit and can be trained longer
    my_epochs = 100
    features_path = 'sparse_autoe_features.pickle'
    labels_path = 'sparse_autoe_labels.pickle'
# the encoded representation: 1024 floats (a 32x32 grayscale image) are
# mapped to 2048 floats, i.e. the code is actually larger than the input
encoding_dim = 2048

input_img = Input(shape=(1024,))
encoded = Dense(encoding_dim, activation='relu',
                activity_regularizer=my_regularizer)(input_img)
decoded = Dense(1024, activation='sigmoid')(encoded)

autoencoder = Model(input_img, decoded)
encoder = Model(input_img, encoded)

# standalone decoder model that reuses the trained output layer
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
from keras import optimizers

customAdam = optimizers.Adam(lr=0.001)  # you have no idea how many times I changed this number
autoencoder.compile(optimizer=customAdam,
                    loss="mean_squared_error",
                    # accuracy is not very informative for a real-valued
                    # reconstruction target, but monitor it anyway
                    metrics=["accuracy"])
train = pd.read_pickle('./traffic-signs-data/train.p')
test = pd.read_pickle('./traffic-signs-data/test.p')
x_train1, y_train = train['features'], train['labels']
x_test1, y_test = test['features'], test['labels']

# convert RGB images to grayscale
x_train = []
x_test = []
for i in x_train1:
    x_train.append(cv2.cvtColor(i, cv2.COLOR_RGB2GRAY))
for i in x_test1:
    x_test.append(cv2.cvtColor(i, cv2.COLOR_RGB2GRAY))

# scale to [0, 1] and flatten each 32x32 image to a 1024-vector
x_train = np.array(x_train).astype('float32') / 255.
x_test = np.array(x_test).astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
history = autoencoder.fit(x_train, x_train,
                          epochs=my_epochs,
                          batch_size=128,
                          shuffle=True,
                          validation_data=(x_test, x_test),
                          verbose=2)

encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)

_, train_acc = autoencoder.evaluate(x_train, x_train, verbose=0)
_, test_acc = autoencoder.evaluate(x_test, x_test, verbose=0)
print(train_acc, test_acc)

pyplot.subplot(211)
pyplot.title('Loss')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()

"""pyplot.subplot(212)
pyplot.title('Accuracy')
pyplot.plot(history.history['acc'], label='train')
pyplot.plot(history.history['val_acc'], label='test')
pyplot.legend()
pyplot.show()"""

pickle.dump(encoded_imgs, open(features_path, 'wb'))
pickle.dump(y_test, open(labels_path, 'wb'))
n = 6  # how many images we will display
plt.figure(figsize=(10, 2), dpi=100)
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(32, 32))
    plt.gray()
    ax.set_axis_off()

    # display reconstruction
    ax = plt.subplot(2, n, i + n + 1)
    plt.imshow(decoded_imgs[i].reshape(32, 32))
    plt.gray()
    ax.set_axis_off()
plt.show()
```
Here is the output:

```
Epoch 99/100
```
Please, somebody help me if you find the answer.
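For the classification step itself, the features dumped above can be fed to a separate classifier. Here is a minimal sketch, reusing `encoder`, `encoding_dim`, and the data variables from the code above; the hidden layer size, epoch count, and the assumption of 43 GTSRB classes are illustrative choices, not part of the original post:

```python
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import to_categorical

num_classes = 43  # GTSRB has 43 sign classes

# encode the training and test images with the (frozen) trained encoder
z_train = encoder.predict(x_train)
z_test = encoder.predict(x_test)

# small softmax classifier on top of the learned features
feat_in = Input(shape=(encoding_dim,))
h = Dense(128, activation='relu')(feat_in)
out = Dense(num_classes, activation='softmax')(h)
clf = Model(feat_in, out)
clf.compile(optimizer='adam',
            loss='categorical_crossentropy',
            metrics=['accuracy'])

clf.fit(z_train, to_categorical(y_train, num_classes),
        epochs=20, batch_size=128,
        validation_data=(z_test, to_categorical(y_test, num_classes)))
```

Note that this trains the classifier on features from an encoder that was fit only to reconstruct, which is exactly the setup the first comment in this thread suggests improving by joint training.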
Hello, I have an issue, and I think it comes from the dimensions. I am trying to find working code for improving classification using an autoencoder. I followed this example: keras autoencoder vs PCA. But instead of MNIST data, I tried to use it with CIFAR-10, so I made some changes, but it seems like something does not fit. Could anyone please help me with this? If you have another example that runs on a different dataset, that would also help.

The validation data passed to reduced.fit, which is (X_test, Y_test), is not learned, so .evaluate() gives the wrong accuracy: it always reports val_loss: 2.3026 - val_acc: 0.1000. This is the code, and the error:
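The pasted code and error did not come through above, so only a general pointer: a val_loss of 2.3026 is exactly ln(10), and val_acc: 0.1000 is chance level for CIFAR-10's 10 classes, which usually means the classifier head is not learning at all. Common causes are a label encoding that does not match the loss (categorical_crossentropy expects one-hot labels) or a learning rate that is too high. Below is a minimal sketch of a classifier head on a frozen encoder; `encoder`, `encoding_dim`, and the data variables `X_train`, `y_train`, `X_test`, `y_test` are hypothetical stand-ins for the code that was not posted:

```python
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import to_categorical

# hypothetical: `encoder` is an already-trained Model and
# `encoding_dim` is the size of its latent code
code_in = Input(shape=(encoding_dim,))
out = Dense(10, activation='softmax')(code_in)  # 10 CIFAR-10 classes
head = Model(code_in, out)
head.compile(optimizer='adam',
             loss='categorical_crossentropy',
             metrics=['accuracy'])

# categorical_crossentropy expects one-hot-encoded labels
Y_train = to_categorical(y_train, 10)
Y_test = to_categorical(y_test, 10)
head.fit(encoder.predict(X_train), Y_train,
         validation_data=(encoder.predict(X_test), Y_test),
         epochs=20, batch_size=128)
```

If the validation accuracy still sits at exactly 0.1000 with a setup like this, it is worth checking that the encoder output is not collapsing (e.g., all-zero codes from an over-regularized sparse layer), since a constant input to the head also yields chance-level predictions.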