-
Hello, and thank you for a great repo!
I was wondering whether it would make sense to add Gromov-Wasserstein Autoencoders.
code: https://github.com/ganmodokix/gwae
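In case it helps frame the request, a short sketch of the Gromov-Wasserstein distance that GWAE builds on, assuming the standard metric-measure-space formulation (the notation below is illustrative and not taken from the linked repo):

```latex
% Gromov-Wasserstein distance between metric-measure spaces (X, d_X, mu)
% and (Y, d_Y, nu); Pi(mu, nu) is the set of couplings of mu and nu.
\[
  \mathrm{GW}_p(\mu, \nu)^p
    = \inf_{\pi \in \Pi(\mu, \nu)}
      \iint \bigl| d_X(x, x') - d_Y(y, y') \bigr|^p
      \, d\pi(x, y)\, d\pi(x', y')
\]
```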
-
Hi,
Thank you so much for this nice implementation. I am trying to adapt it to my own input dataset.
However, I don't understand the num_units or num_filters parameter. Do you mean by that numbe…
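The question is truncated and the repo's actual API may differ, but as a rough illustration of how these names are commonly used in Keras-style encoders, `num_filters` usually sets the number of convolutional feature maps per layer and `num_units` the width of a dense layer (both names here are assumptions, not the repo's definitions):

```python
# Illustrative only: typical meaning of "num_filters" / "num_units" in a
# Keras encoder. Names and shapes are assumptions, not the repo's code.
from tensorflow import keras
from tensorflow.keras import layers

num_filters = 64   # number of convolutional feature maps per layer
num_units = 128    # width (dimensionality) of the dense bottleneck

encoder = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(num_filters, kernel_size=3, strides=2,
                  padding="same", activation="relu"),
    layers.Conv2D(num_filters * 2, kernel_size=3, strides=2,
                  padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(num_units, activation="relu"),  # num_units hidden units
])
encoder.summary()
```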
-
Hello,
Correct me if I'm wrong, but my understanding is that the simplified expression of the Wasserstein distance obtained in Theorem 1 relies heavily on the hypothesis that the latent codes distr…
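Assuming "Theorem 1" refers to the WAE paper (Tolstikhin et al., 2018), the statement being discussed is roughly the following sketch; the constraint on the aggregated posterior $Q_Z$ is exactly the hypothesis on the latent-code distribution the question points at:

```latex
% WAE Theorem 1 (sketch): for a deterministic decoder X = G(Z) and cost c,
% the optimal-transport cost reduces to an encoder-side problem constrained
% so that the aggregated posterior Q_Z matches the prior P_Z.
\[
  W_c(P_X, P_G)
    = \inf_{Q(Z \mid X)\, :\, Q_Z = P_Z}
      \mathbb{E}_{P_X}\, \mathbb{E}_{Q(Z \mid X)}
      \bigl[ c\bigl(X, G(Z)\bigr) \bigr]
\]
```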
-
https://arxiv.org/pdf/1703.10717.pdf
We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Ne…
leo-p updated 7 years ago
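For readers skimming this thread, a sketch of the equilibrium-enforcing objective that paper (BEGAN) describes, assuming its standard formulation with an autoencoder discriminator whose reconstruction loss is $\mathcal{L}(\cdot)$:

```latex
% BEGAN objective (sketch). D is an autoencoder; L(v) = |v - D(v)| is its
% reconstruction loss; gamma is the diversity ratio; lambda_k controls how
% fast the balancing variable k_t adapts to keep G and D in equilibrium.
\begin{align*}
  \mathcal{L}_D &= \mathcal{L}(x) - k_t\, \mathcal{L}(G(z_D)) \\
  \mathcal{L}_G &= \mathcal{L}(G(z_G)) \\
  k_{t+1}       &= k_t + \lambda_k \bigl( \gamma\, \mathcal{L}(x)
                    - \mathcal{L}(G(z_G)) \bigr)
\end{align*}
```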
-
beta-VAE is also a very good reference: http://openreview.net/forum?id=Sy2fzU9gl
Learning an interpretable factorised representation of the independent data generative factors of the world without super…
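For reference, the beta-VAE objective (assuming the standard formulation from that paper) is the ELBO with a single hyperparameter $\beta$ weighting the KL term:

```latex
% beta-VAE objective: beta > 1 up-weights the KL term, pressuring the
% posterior q_phi(z|x) toward the isotropic prior p(z) and encouraging
% a factorised (disentangled) latent representation.
\[
  \mathcal{L}(\theta, \phi; x, \beta)
    = \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr]
      - \beta\, D_{\mathrm{KL}}\bigl( q_\phi(z \mid x) \,\|\, p(z) \bigr)
\]
```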
-
Hi @Lillliant,
This issue page is for your second task on SEERa. As you know, SEERa has a layered structure and its second layer is tml (topic modeling layer). For now, we have 3 methods in SEERa t…
-
Regarding the [example_aae.py](https://github.com/bstriner/keras-adversarial/blob/master/examples/example_aae.py):
Can anyone explain how this code works without having the KL divergence included for…
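Not an answer specific to that file, but as background (assuming the standard adversarial autoencoder setup of Makhzani et al.): the KL term of a VAE is replaced by an adversarial game in which a discriminator tries to tell encoder outputs from prior samples, so the aggregated posterior is pushed toward the prior without ever computing a KL divergence. A minimal Keras-style sketch of that latent discriminator, with names and shapes that are illustrative rather than copied from example_aae.py:

```python
# Illustrative sketch of the AAE idea: a discriminator on latent codes
# replaces the analytic KL(q(z|x) || p(z)) term. Not the example's code.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 8

# Discriminator scores a latent vector as "real" (drawn from the prior)
# or "fake" (produced by the encoder).
discriminator = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# One discriminator step: prior samples labelled 1, encoder outputs labelled 0.
z_prior = np.random.normal(size=(32, latent_dim)).astype("float32")
z_encoded = np.random.normal(size=(32, latent_dim)).astype("float32")  # stand-in for encoder(x)
z_batch = np.concatenate([z_prior, z_encoded])
y_batch = np.concatenate([np.ones((32, 1)), np.zeros((32, 1))])
discriminator.train_on_batch(z_batch, y_batch)

# The encoder is then trained with the reconstruction loss plus the
# generator side of this game, so its codes become indistinguishable
# from prior samples -- no explicit KL term is needed.
```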
-
## Keyword: volume render
There is no result
## Keyword: volumetric render
There is no result
## Keyword: remote render
There is no result
## Keyword: hybrid render
There is no result
## Keyword: …
-
## A brief description of what your model does
I trained RAVE v2 with Wasserstein regularisation on this dataset (https://zenodo.org/records/8333916) and exported the model at around 2.1M steps. The result…