Closed maurizioidini closed 3 years ago
I am facing the same issue. I found an example by fchollet: https://github.com/fchollet/keras/blob/8f4d6fc3fa6cd35a36de190a5e44ab4817cc68e8/examples/mnist_sae.py and tried that architecture, but the API has changed entirely: there is no AutoEncoder class or containers module anymore. I really wonder why stacking an autoencoder is this hard; it seems like the developers want to keep it a secret. Pre-training and all of that works fine. But what if I want to take the pre-trained weights and biases and use them to initialize a new network? There is no clue anywhere. Sorry about my tone, but this is driving me insane. I implemented this network in MATLAB, which took me months, and now I need to confirm my network with TensorFlow, because it is supposedly so easy to use (it is absolutely not, unless you assume that all of deep learning means doing toy experiments on MNIST) and developed by Google. I can't use it; I can't even initialize the network with my pre-trained weights. This is my thesis, I am still trying to find a clue, and it has been 3 weeks.
I would also appreciate some feedback on this issue.
First of all, @pinareceaktan, thank you for sharing that link. Based on the information it contains, I was able to solve the DAE problem in Keras.
Here's how I designed my DAE:
```
# Keras 1.x-era functional API
from keras.layers import Input, Dense, Dropout, GaussianNoise
from keras.models import Model
from keras.optimizers import SGD

inputs = Input(shape=(input_dim,))
# corrupt the input: additive Gaussian noise plus dropout (the "denoising" part)
encoded = GaussianNoise(stddev=0.3)(inputs)
encoded = Dropout(0.1)(encoded)
encoded = Dense(hidden_dim, activation='relu')(encoded)
# reconstruct the original (clean) input
decoded = Dense(input_dim, activation=activation)(encoded)
ae = Model(input=inputs, output=decoded)
# optimizer configuration
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
# compile: pass the configured optimizer object, not the string 'SGD',
# otherwise the lr/decay/momentum settings above are silently ignored
ae.compile(loss='categorical_crossentropy', optimizer=sgd)
# pretrain + finetune autoencoder: input and target are the same data
ae.fit(train_data, train_data, batch_size=batch_size, nb_epoch=epochs, verbose=1)
```
While pre-training your layers, you use the same data as both input and target in .fit() (see my code above). But to build an SDAE that classifies its input, you need to fine-tune your model, so .fit() should be given the input data and the output labels.
What you need to understand here is that fine-tuning happens through .fit(). If you check the source code, you will see that the weights are updated across all layers.
Now, to build an n-layer SDAE, treat each layer as a separate Denoising Autoencoder (DA) model and train them individually. Get the weights/biases from these models with .get_weights(). This is the pre-training step.
With these weights/biases, build another model with n layers and add a 'softmax' activation layer at the end. When you call .fit(), your model is "fine-tuned" by backpropagation: the pre-trained weights are updated jointly with the softmax layer, which classifies your input (the model is now deterministic, no longer generative).
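The greedy pre-training step described above can be sketched framework-agnostically. Below is a minimal NumPy illustration of my own devising (the `pretrain_dae` and `encode` helpers are not Keras API; in Keras each DAE would be a small Model whose encoder weights you read back with .get_weights()): each layer is trained as a separate denoising autoencoder on the previous layer's codes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_dae(X, hidden_dim, epochs=50, lr=0.5, noise=0.3):
    """Train one denoising autoencoder; return its encoder weights (W, b)."""
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, hidden_dim)); b = np.zeros(hidden_dim)
    W2 = rng.normal(0, 0.1, (hidden_dim, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        Xn = X + noise * rng.normal(size=X.shape)  # corrupt the input
        H = sigmoid(Xn @ W + b)                    # encode
        R = sigmoid(H @ W2 + b2)                   # decode
        err = R - X                                # reconstruct the *clean* input
        # backpropagate through the two sigmoid layers
        dR = err * R * (1 - R)
        dH = (dR @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dR / n; b2 -= lr * dR.mean(axis=0)
        W -= lr * Xn.T @ dH / n; b -= lr * dH.mean(axis=0)
    return W, b  # the decoder (W2, b2) is thrown away after pre-training

def encode(X, layers):
    """Push data through a stack of pre-trained encoder layers."""
    for W, b in layers:
        X = sigmoid(X @ W + b)
    return X

# Greedy layer-wise pre-training on toy data: each DAE is trained
# on the codes produced by the layers before it.
X = rng.random((64, 20))
layers, inp = [], X
for h in (16, 8):
    W, b = pretrain_dae(inp, h)
    layers.append((W, b))
    inp = encode(inp, [layers[-1]])
```

The collected `(W, b)` pairs then initialize the stacked classifier before fine-tuning, exactly as described above.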
If you want an 'svm' classifier instead, check this link out: https://github.com/fchollet/keras/issues/6090
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
Does someone have a link to a tutorial for SDAE in Keras?
Have you found it? I really need it.
Hi all. I don't have an issue, just a question about training a Stacked Denoising Autoencoder. I saw this blog post: https://blog.keras.io/building-autoencoders-in-keras.html - it's a very complete article, but I have a doubt about pre-training and fine-tuning. Pre-training is simple: I imagine you train each denoising autoencoder separately, feeding it the output of the previous dA. Right? :) My question is about fine-tuning. I would train the entire deep network with gradient descent, updating the weights of all layers. For this purpose I need to stack all the layers with the weights obtained during pre-training, but I have no idea how to do it! Have you ever tried? Thank you
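Regarding the fine-tuning question: here is a minimal NumPy sketch of what joint training does, assuming the pre-trained weight matrices are already in hand. The shapes, the `forward` helper, and the toy data are all illustrative; in Keras you would instead build the stacked Model, call layer.set_weights([W, b]) on each Dense layer, and then .fit() with labels. The point is that one backpropagation step updates every layer, including the pre-trained ones.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stand-ins for weights recovered from layer-wise pre-training
# (in Keras you would pull these out with .get_weights()).
pretrained = [(rng.normal(0, 0.1, (20, 16)), np.zeros(16)),
              (rng.normal(0, 0.1, (16, 8)), np.zeros(8))]

# Fresh softmax classification head stacked on top of the encoders.
Wc, bc = rng.normal(0, 0.1, (8, 3)), np.zeros(3)

def forward(X):
    """Run the stacked encoders; return all activations and class probs."""
    acts = [X]
    for W, b in pretrained:
        acts.append(sigmoid(acts[-1] @ W + b))
    return acts, softmax(acts[-1] @ Wc + bc)

# Toy data and one step of joint gradient descent. The gradient flows
# through every layer, so the pre-trained weights move too: that joint
# update is exactly what "fine-tuning" means.
X = rng.normal(size=(32, 20))
y = rng.integers(0, 3, size=32)
lr = 0.1
W0_before = pretrained[0][0].copy()

acts, P = forward(X)
G = P.copy()
G[np.arange(len(y)), y] -= 1.0  # softmax cross-entropy gradient
G /= len(y)
d = (G @ Wc.T) * acts[-1] * (1 - acts[-1])  # error signal entering the stack
Wc -= lr * acts[-1].T @ G
bc -= lr * G.sum(axis=0)
for i in range(len(pretrained) - 1, -1, -1):
    W, b = pretrained[i]
    gW, gb = acts[i].T @ d, d.sum(axis=0)
    if i > 0:  # propagate the error to the layer below before updating
        d = (d @ W.T) * acts[i] * (1 - acts[i])
    pretrained[i] = (W - lr * gW, b - lr * gb)
```

In Keras the whole loop above is what .fit() performs internally once the layers are wired into a single Model, so "concatenating" the layers just means building that Model and seeding each layer's weights before calling .fit().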