Open snapo opened 12 months ago
It is difficult to compete with the standard autoencoder approach, because with the current RLS approach we can only fine-tune one layer of weights. But you can do this (784 -> 784 -> 784):

1. Make a random projection of the 784 inputs into 784 nodes (or more) in the first layer.
2. Add a non-linear activation function like ReLU or tanh.
3. From the output of the non-linear layer, use RLS to map this non-linear output to the 784 outputs, where output = input.

For better performance, increase the number of neurons in the middle layer (i.e. more than 784), but this can be computationally intensive because RLS has O(n²) complexity.
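The three steps above can be sketched in a few lines of numpy. This is only a sketch under some assumptions: batch regularized least squares stands in for the sample-by-sample recursive RLS update (both minimize the same ridge objective, so the fitted output weights are equivalent), and random data stands in for flattened MNIST images.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 784, 1024   # middle layer wider than 784, as suggested
X = rng.random((256, n_in))  # stand-in for flattened MNIST images in [0, 1]

# Step 1: fixed random projection of the 784 inputs into the hidden layer
W = rng.standard_normal((n_in, n_hidden)) / np.sqrt(n_in)

# Step 2: non-linear activation (tanh here; ReLU also works)
H = np.tanh(X @ W)

# Step 3: solve for output weights B so that H @ B ≈ X (target = input).
# Ridge-regularized normal equations; RLS reaches the same solution
# recursively, one sample at a time, at O(n²) cost per update.
lam = 1e-3
B = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ X)

recon = np.tanh(X @ W) @ B
print("reconstruction MSE:", np.mean((recon - X) ** 2))
```

Only `B` is learned; `W` stays random, which is exactly the "only one trainable layer" limitation mentioned above.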
That's a pretty good idea :-) Only the O(n²) will "kinda" be a problem :-)
Thanks for sharing...
Hi, did you somehow figure out how it would be possible to create an autoencoder with RLS? For example, with the MNIST dataset, to remove noise OR to create new numbers...
Normally the autoencoder does something like 784 -> 256 -> 784, either for compression or to create new images if one starts from the 256-node hidden layer. Is this somehow possible?