-
MNIST Results:
We used a standard variational autoencoder, trained with digits 1 and 3 as the inliers. Each model was trained with a batch size of 128, where each image is an unnormalized grayscale image of…
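The data setup described above (keep only the inlier digits 1 and 3, batch at 128, leave pixels unnormalized) could be sketched like this; the random arrays are stand-ins for the real MNIST data, which the original doesn't show how to load:

```python
import numpy as np

# stand-in arrays; in practice these would be the actual MNIST images/labels
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(1000, 28, 28))  # unnormalized grayscale
labels = rng.integers(0, 10, size=1000)

# keep only the inlier digits 1 and 3
mask = np.isin(labels, [1, 3])
inliers = images[mask]

# split into batches of 128 (the last batch may be smaller)
batch_size = 128
batches = [inliers[i:i + batch_size] for i in range(0, len(inliers), batch_size)]
```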
-
Hi @bogedy
Thanks for sharing your code.
I trained the model on a custom dataset. The results get better after every epoch; however, the loss is increasing. Looks like you maximize instead of mi…
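For reference, the standard VAE objective is to maximize the ELBO, which is usually implemented by minimizing the negative ELBO; if the reported loss is the (positive) ELBO, it is expected to increase as training improves. A NumPy sketch of the negative-ELBO convention (assuming a Bernoulli decoder and diagonal-Gaussian posterior, since the original code isn't shown):

```python
import numpy as np

def negative_elbo(recon, x, mu, logvar, eps=1e-7):
    """Negative ELBO for a Gaussian-posterior, Bernoulli-decoder VAE.

    Minimize this quantity; maximizing it would train the model backwards.
    """
    # Bernoulli reconstruction NLL (binary cross-entropy), summed over pixels
    recon_nll = -np.sum(x * np.log(recon + eps)
                        + (1 - x) * np.log(1 - recon + eps))
    # closed-form KL divergence between N(mu, sigma^2) and N(0, I)
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon_nll + kl
```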
-
add sections on RNN, LSTM, GRU, VAE, etc. — architectures used before transformers
-
Hello,
I want to apply a Latent Diffusion Model to medical image data. How can I feed training images from a directory of .jpg files to train the diffusion model?
Plus, I don't want the mode…
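One common way to get a directory of .jpg files into training is to load them into a single array first; a minimal sketch, assuming Pillow and NumPy are available (the function name `load_jpg_dir` and the target size are illustrative, not from the repository):

```python
from pathlib import Path

import numpy as np
from PIL import Image

def load_jpg_dir(root, size=(256, 256)):
    """Load every .jpg under `root` (recursively) as float arrays in [0, 1]."""
    paths = sorted(Path(root).glob("**/*.jpg"))
    images = []
    for p in paths:
        img = Image.open(p).convert("RGB").resize(size)
        images.append(np.asarray(img, dtype=np.float32) / 255.0)
    return np.stack(images) if images else np.empty((0, *size, 3))
```

In a framework-specific pipeline you would normally wrap this in the framework's dataset abstraction instead of materializing everything in memory, but the idea is the same.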
-
-
This slide is very vague:
![image](https://user-images.githubusercontent.com/1888623/99530320-6f607d00-29a1-11eb-9b9b-c4e9c5a71ac7.png)
t03_features.pdf
-
Hi CNTK Team!
I'm currently trying to implement a Variational Autoencoder using CNTK. I'd need functions for random sampling (like random_normal() in TensorFlow) for this. CNTK's backend already has…
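Whatever the framework API ends up being, the sampling step a VAE needs is the reparameterization trick: draw standard-normal noise and shift/scale it, so gradients flow through the mean and log-variance while the randomness stays external. A framework-agnostic NumPy sketch of the idea (not CNTK code):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * logvar) * eps
```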
-
-
### Description
To use different quantum computing techniques focused on Machine Learning as the principal basis for a quantum autoencoder able to reduce data in a medical dataset, and also to …
-
The idea is perhaps future-looking, but I'd like to bring it up for discussion.
## Motivations
* Reduce the GPU/NPU memory required for completing a use case (e.g. text2image).
* Reduce the mem…