-
We could support dimensionality reduction through autoencoders. Here's a useful-looking tutorial (it looks relatively straightforward to implement all of the variants it covers): https://blog.k…
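As a rough illustration of the idea (my own sketch, not taken from the linked tutorial), a linear autoencoder with a bottleneck layer compresses data to a lower dimension and learns to reconstruct it; all sizes and names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n = 8, 2, 200            # 8-D inputs, 2-D bottleneck

# synthetic data with 2-D latent structure, normalized to unit scale
X = rng.normal(size=(n, 2)) @ rng.normal(size=(2, n_in))
X /= X.std()

W1 = 0.1 * rng.normal(size=(n_in, n_hid))   # encoder weights
W2 = 0.1 * rng.normal(size=(n_hid, n_in))   # decoder weights

def mse(X, W1, W2):
    """Mean squared reconstruction error of the autoencoder."""
    return np.mean((X - X @ W1 @ W2) ** 2)

lr = 0.05
start = mse(X, W1, W2)
for _ in range(500):
    H = X @ W1                         # encode: project down to 2-D codes
    G = 2 * (H @ W2 - X) / X.size      # d(loss)/d(reconstruction)
    gW2 = H.T @ G                      # decoder gradient
    gW1 = X.T @ (G @ W2.T)             # encoder gradient (chain rule)
    W1 -= lr * gW1
    W2 -= lr * gW2
end = mse(X, W1, W2)
```

After training, `X @ W1` gives a 2-D representation of each 8-D sample, which is the dimensionality-reduction use case; with linear layers this recovers the same subspace PCA would.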
-
-
Exciting work. I love that you are exploring autoencoders.
I hope this works similarly well with sparse autoencoders, hence the title of the issue.
-
NNet Models
- Template + Module based Framework
- Feedforward and back-propagation (BP)
- Sparse Autoencoder
- Stacked Autoencoder
Utilities
- Objective templates: L1/L2 norm, softmax, ...
- Trainin…
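For the objective templates listed above, a minimal sketch of what L1/L2 and softmax (with cross-entropy) objectives could look like — the function names and signatures are illustrative, not the project's actual API:

```python
import numpy as np

def l2_objective(w, lam=1e-3):
    """L2 (weight-decay) penalty and its gradient."""
    return 0.5 * lam * np.sum(w ** 2), lam * w

def l1_objective(w, lam=1e-3):
    """L1 (sparsity-inducing) penalty and its subgradient."""
    return lam * np.sum(np.abs(w)), lam * np.sign(w)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def softmax_cross_entropy(z, y):
    """Mean cross-entropy loss and its gradient w.r.t. logits z,
    for one-hot labels y. The gradient (p - y)/N is what
    back-propagation would feed into the layers below."""
    p = softmax(z)
    loss = -np.sum(y * np.log(p + 1e-12)) / z.shape[0]
    return loss, (p - y) / z.shape[0]
```

Each template returns a (value, gradient) pair so a BP-based trainer can plug any of them in interchangeably.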
bobye updated 10 years ago
-
While studying autoencoder architectures, I discovered that the similar-sounding terms "transposed convolution" and "deconvolution" have caused some confusion. I would like to clarify their differences and expl…
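To make the relationship concrete, here is a small NumPy sketch (my own illustration, not tied to any particular framework) showing that a transposed convolution is equivalent to dilating the input with zeros and then running an ordinary convolution — which is also why "deconvolution" is a misleading name, since nothing is being inverted:

```python
import numpy as np

def conv1d_valid(x, k):
    """Ordinary 'valid' cross-correlation (what DL libraries call convolution)."""
    m = len(k)
    return np.array([np.dot(x[i:i + m], k) for i in range(len(x) - m + 1)])

def transposed_conv1d(x, k, stride=2):
    """Transposed convolution via zero-insertion + full convolution."""
    # 1) dilate: insert (stride - 1) zeros between input elements
    z = np.zeros((len(x) - 1) * stride + 1)
    z[::stride] = x
    # 2) full convolution: pad by (len(k) - 1) and correlate the flipped kernel
    z = np.pad(z, len(k) - 1)
    return conv1d_valid(z, k[::-1])

def transposed_conv1d_scatter(x, k, stride=2):
    """Equivalent 'scatter' view: each input element stamps the kernel
    into the output at offset i * stride."""
    out = np.zeros((len(x) - 1) * stride + len(k))
    for i, xi in enumerate(x):
        out[i * stride:i * stride + len(k)] += xi * k
    return out
```

Both implementations produce identical results, and the output length `(n - 1) * stride + len(k)` matches the standard transposed-convolution shape formula with zero padding.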
-
Dear author, I am a graduate student with a strong interest in your paper "Energy Conservation in Wireless Sensor Networks Using Partly Informed Sparse Autoencoder". While carefully reading …
-
A sparse autoencoder is an autoencoder with the additional constraint that most coefficients tend to be zero, as described here: http://deeplearning.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity
I…
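In the linked notes, that sparsity constraint is enforced by adding a KL-divergence penalty between a target activation rate ρ and the observed mean activation ρ̂ of each hidden unit. A minimal sketch (variable names are mine):

```python
import numpy as np

def kl_sparsity_penalty(rho, rho_hat):
    """Sum over hidden units of KL(rho || rho_hat) for Bernoulli rates.

    Zero when each unit's observed mean activation rho_hat equals
    the target rho, and growing as they drift apart; adding this
    term to the reconstruction loss pushes most hidden activations
    toward zero (i.e., makes the code sparse).
    """
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)  # avoid log(0)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# mean activation of each hidden unit over a batch of sigmoid outputs
activations = np.array([[0.04, 0.60],
                        [0.06, 0.55]])
rho_hat = activations.mean(axis=0)        # per-unit rates: [0.05, 0.575]
penalty = kl_sparsity_penalty(0.05, rho_hat)
```

Here the first unit already matches the target rate 0.05 and contributes essentially nothing, while the second unit (firing ~57% of the time) dominates the penalty.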
-
Hi Team,
First, I want to say thank you so much for implementing a truly model-agnostic method for counterfactuals! I've been searching for many months now for a counterfactual tool I can easily …
-
### Model description
https://github.com/noanabeshima/tiny_model
It's a small language model trained on TinyStories for interpretability, with sparse autoencoders and transcoders added. It has no…
-
A large number of [examples](https://github.com/apache/incubator-mxnet/tree/v1.7.x/example) in the official MXNet repo are using the Module APIs for training. Since the Module APIs will be removed in m…