Matlab/Octave toolbox for deep learning. Includes Deep Belief Nets, Stacked Autoencoders, Convolutional Neural Nets, Convolutional Autoencoders and vanilla Neural Nets. Each method has examples to get you started.
I want to use the SAE for dimensionality reduction, but it's not clear to me how to get the hidden-layer activations. The activations stored in the SAE struct seem to have the wrong dimensions.
The following was trained on MNIST, which has 60,000 training samples of dimension 784. How can I get the hidden-layer activations (100 dimensions in this case) for all 60,000 training samples?
Here is some of the struct output for the code below:
output: 'sigm'
W: {[100x785 double] [784x101 double]}
vW: {[100x785 double] [784x101 double]}
p: {[] [1x100 double] [1x784 double]}
a: {[100x785 double] [100x101 double] [100x784 double]}
e: [100x784 double]
L: 8.1274
dW: {[100x785 double] [784x101 double]}
Code from example SAE:
...
sae = saesetup([784 100]);
...
sae = saetrain(sae, train_x, opts);
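From the struct dump, the `a` fields appear to hold activations only for the last mini-batch (hence the leading dimension of 100, the batch size), not for the full training set. One way around this is to recompute the hidden codes directly from the trained encoder weights. This is a hypothetical sketch, not part of the toolbox's documented API; it assumes `sae.ae{1}.W{1}` is the [100x785] encoder weight matrix shown above, with the bias term folded into the first column (matching the bias column of ones the toolbox prepends to the input):

```matlab
% Sketch: compute the 100-dim hidden codes for all training samples.
% Assumes sae.ae{1}.W{1} is [100x785] with the bias in column 1.
m = size(train_x, 1);                      % 60000 samples
X = [ones(m, 1) train_x];                  % prepend bias column -> 60000x785
H = 1 ./ (1 + exp(-X * sae.ae{1}.W{1}'));  % sigmoid activation -> 60000x100
```

`H` then has one 100-dimensional row per training sample, which is the reduced representation you can feed into a downstream model.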