-
Please check whether this paper is about 'Voice Conversion' or not.
## article info.
- title: **SoftGAN: Learning generative models efficiently with application to
CycleGAN Voice Conversion**
- summ…
-
Hello,
I was wondering whether you are planning to add functional APIs for advanced research-based models such as VAE, VAE-GAN, GANs, etc.
If so, it would make things much easier for the TensorFlow community…
-
Thanks for your great work! I noticed that StableSR uses `ldm.models.autoencoder.AutoencoderKL` instead of `ldm.models.autoencoder.VQModel` as its pre-trained autoencoder. Since VQGAN has a codebook t…
-
Hello @Glaciohound ,
Thank you for sharing the code of your great work.
I read your paper and code but cannot find which of your 50 CUB classes were used for testing.
Can you share how you chose th…
-
Hello, what type of GPU did you use for training, how many GPUs, and how long does training take?
-
Hi @akshitac8 ,
I wanted to clarify the procedure for retrieving the fine-tuned features. The features provided in the datasets are ResNet101 features trained on ImageNet. How do you achieve the fine…
RitiP updated 2 years ago
-
I notice that you use the sent attributes of the CUB dataset, but the att attributes are usually used on CUB, as in CLSWGAN, TF-VAEGAN, and so on. Could you please provide your results when using the att attributes for fa…
-
What GPUs, and how many, did you use for training on each dataset? How much time did each one take?
-
I have a small problem with your provided checkpoints.
I downloaded the 3DSSG dataset from here: https://campar.in.tum.de/public_datasets/3DSSG/3DSSG.zip including the `classes.txt` which has …
-
While reading your paper, I was repeatedly reminded of TF-VAEGAN, and I can confirm from your code that you build on it, except that you removed TF-VAEGAN's feedback module and proposed the SAMC-loss. O…