-
Hi Matt,
Great piece of work, and it's great to see that you answer questions posted here!
I came across your paper a few months ago and ported it to Pytorch with the intention of using the same p…
-
**Introduction:** I have trained autoencoders (vanilla & variational) in Keras on MNIST images, and have observed how well the latent representation in the bottleneck layer works for clustering them …
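Since the Keras code isn't shown, here is a minimal numpy stand-in for the workflow described above: a linear autoencoder trained by gradient descent, whose bottleneck activations serve as the latent representation one would feed to a clustering algorithm. The data, dimensions, and hyperparameters are hypothetical, not taken from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for flattened images: 200 samples, 64 features.
X = rng.normal(size=(200, 64))

d_in, d_latent = X.shape[1], 2     # 2-D bottleneck, easy to plot/cluster
W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))
W_dec = rng.normal(scale=0.1, size=(d_latent, d_in))
lr = 1e-3

for _ in range(500):
    Z = X @ W_enc                  # bottleneck (latent) representation
    X_hat = Z @ W_dec              # reconstruction
    err = X_hat - X                # gradient of 0.5 * ||X_hat - X||^2 w.r.t. X_hat
    # Plain gradient descent on the mean reconstruction error.
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

Z = X @ W_enc                      # latent codes used for clustering
print(Z.shape)                     # (200, 2)
```

In a real Keras model the same idea applies: build a second `Model` whose output is the bottleneck layer and call `predict` on it to get the codes for clustering.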
-
Update the following URL to point to the GitHub repository of
the package you wish to submit to _Bioconductor_
- Repository: https://github.com/dongminjung/VAExprs
Confirm the following by edit…
-
I have struggled to understand the approach in your paper.
In particular, I could not find the role of sampling when running the VAE at inference time, as below.
Also, if you have only used the sampling fun…
kanul updated 3 years ago
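For context on the sampling question above: in many VAE implementations, the reparameterized sample z = μ + σ·ε is used only during training; at inference time the posterior mean μ is commonly returned as a deterministic latent code. A minimal numpy sketch of that convention (shapes and names are illustrative, not from the repository in question):

```python
import numpy as np

def sample_latent(mu, logvar, training, rng=np.random.default_rng(0)):
    """Reparameterization trick: z = mu + sigma * eps during training.

    At inference time, a common convention is to skip sampling and
    return the posterior mean mu as a deterministic latent code.
    """
    if not training:
        return mu
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

mu = np.zeros((4, 8))            # hypothetical batch of posterior means
logvar = np.zeros((4, 8))        # log-variances (sigma = 1 here)

z_train = sample_latent(mu, logvar, training=True)
z_infer = sample_latent(mu, logvar, training=False)
print(np.allclose(z_infer, mu))  # True: inference returns the mean
```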
-
**Submitting author:** @converseg (Geoffrey Converse)
**Repository:** https://github.com/converseg/ML2Pvae
**Version:** v1.0.0
**Editor:** Pending
**Reviewer:** Pending
**Managing EiC:** Daniel S. Kat…
-
The tutorials need a thorough scrubbing. Let's use this issue to coordinate the needed work and sign up for fixing various tutorials.
CC @peastman @neel-shah
-
## 🐛 Bug
The VAE model contained here takes an optional `kl_coeff` parameter that's supposed to be a scaling factor for the KL term of the variational loss. This might be useful to avoid the ["poster…
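For context on what such a coefficient does: with a diagonal-Gaussian posterior and a standard-normal prior, the KL term has a closed form, and a coefficient like `kl_coeff` (often called β) simply scales it against the reconstruction term. A minimal numpy sketch with hypothetical shapes; only the parameter name `kl_coeff` is taken from the issue:

```python
import numpy as np

def vae_loss(x, x_hat, mu, logvar, kl_coeff=1.0):
    """Per-batch VAE loss: reconstruction + kl_coeff * KL.

    KL(N(mu, sigma^2) || N(0, 1)) has the closed form
    -0.5 * sum(1 + logvar - mu^2 - exp(logvar)).
    kl_coeff < 1 down-weights the KL term, one common way to
    mitigate posterior collapse.
    """
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))  # MSE reconstruction
    kl = np.mean(-0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))
    return recon + kl_coeff * kl

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))          # hypothetical batch
x_hat = x + 0.1 * rng.normal(size=x.shape)
mu = rng.normal(size=(8, 4))
logvar = np.zeros((8, 4))

# A smaller KL weight yields a smaller total loss (KL is non-negative).
print(vae_loss(x, x_hat, mu, logvar, kl_coeff=0.1)
      < vae_loss(x, x_hat, mu, logvar, kl_coeff=1.0))  # True
```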
-
GANs and variational autoencoders are popular, but they are not the only generative models available. Other models may offer different benefits. Compile research on potential avenues.
-
Hello, thanks for sharing this code. I am confused about one point: MNIST pixel values are continuous (not just 0 and 1), so why can we use a Bernoulli distribution in the decoder?
egrcc updated 3 years ago
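For context on the question above: the Bernoulli decoder's negative log-likelihood is binary cross-entropy, which is mathematically well-defined for any target in [0, 1], not only for {0, 1}; treating grey-scale pixels as expected values of Bernoulli variables is a common (if theoretically loose) convention. A minimal numpy sketch, with made-up pixel values:

```python
import numpy as np

def bernoulli_nll(x, p, eps=1e-7):
    """Negative log-likelihood of a Bernoulli decoder (binary cross-entropy).

    Well-defined for continuous targets x in [0, 1], not only x in {0, 1}:
    each pixel is treated as the expected value of a Bernoulli variable.
    """
    p = np.clip(p, eps, 1 - eps)       # avoid log(0)
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

x = np.array([0.0, 0.3, 0.7, 1.0])     # continuous "pixels" in [0, 1]
p = np.array([0.1, 0.3, 0.7, 0.9])     # decoder outputs (sigmoid range)
print(bernoulli_nll(x, p))             # finite; minimized when p == x
```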
-
I guess your work is an implementation of this paper:
[Constrained Generation of Semantically Valid Graphs via Regularizing Variational Autoencoders](https://papers.nips.cc/paper/7942-constrained-gen…