syorami / Autoencoders-Variants

Pytorch implementations of various types of autoencoders

sparse_autoencoder_l1: does the L1 constraint really make the representation sparse? #1

Open menglin0320 opened 5 years ago

menglin0320 commented 5 years ago

Did you find any paper, or run any empirical experiment, showing that simply adding an L1 loss on the hidden representation actually encourages sparsity in that representation?

syorami commented 5 years ago

Sorry for the late reply; I didn't see the notifications. I couldn't find the original paper where this kind of analysis is presented, but I hope this one (Why Regularized Auto-Encoders Learn Sparse Representation?) can help you.

menglin0320 commented 5 years ago

There is a difference between sparsity on the parameters and sparsity on the representation. The sparse autoencoder proposed by Andrew Ng learns a sparse representation, whereas it is well known that L1 regularization encourages sparsity on the parameters. After I posted the question, I looked at your graph again and realized it shows sparsity on the parameters; then everything makes sense. But please do not cite that paper in your repo: it is about sparsity on the representation, so it would only confuse people further. I suspect the L1 loss does not encourage sparsity on the representation, but you can print out the hidden representation to check whether that is true.
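For reference, here is a minimal sketch (not the repo's code; the model and variable names are hypothetical) of applying the L1 penalty to the hidden activations rather than the weights, followed by the empirical check suggested above: printing how sparse the learned representation actually is.

```python
import torch
import torch.nn as nn

class SimpleAutoencoder(nn.Module):
    """Toy autoencoder used only to illustrate the activation penalty."""
    def __init__(self, in_dim=784, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        h = self.encoder(x)          # hidden representation
        return self.decoder(h), h

model = SimpleAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-3                      # strength of the L1 penalty on activations

x = torch.rand(64, 784)               # dummy batch standing in for real data
recon, h = model(x)
# L1 is applied to the hidden activations h, not to model.parameters()
loss = nn.functional.mse_loss(recon, x) + l1_weight * h.abs().mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Empirical check: fraction of hidden units that are (near) zero after training.
with torch.no_grad():
    _, h = model(x)
    print("fraction of near-zero activations:", (h.abs() < 1e-3).float().mean().item())
```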

syorami commented 5 years ago

Interesting. I thought sparsity on the representation meant the same thing as sparsity on the parameters. I'll try to figure it out. Sorry, I'm quite busy these days.

meshiguge commented 5 years ago

How about the sparse autoencoder (KL divergence) here? Is there any paper?
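For context, the KL-divergence version is usually attributed to Andrew Ng's "Sparse Autoencoder" lecture notes (CS294A). Below is a minimal sketch of that penalty, not the repo's implementation; it assumes sigmoid hidden activations so that the mean activation lies in (0, 1), and the function name is illustrative.

```python
import torch

def kl_sparsity_penalty(hidden, rho=0.05, eps=1e-8):
    """KL(rho || rho_hat) summed over hidden units.

    hidden: (batch, hidden_dim) activations in (0, 1), e.g. torch.sigmoid(encoder(x)).
    rho: target average activation per hidden unit.
    """
    rho_hat = hidden.mean(dim=0).clamp(eps, 1 - eps)   # average activation per unit
    kl = rho * torch.log(rho / rho_hat) + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()

# Usage: add beta * kl_sparsity_penalty(h) to the reconstruction loss.
```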