-
-
[You posted on Reddit](https://www.reddit.com/r/MachineLearning/comments/l4rnfv/p_why_are_stacked_autoencoders_still_a_thing/).
I think this is very cool.
In the Reddit post you ask if you misse…
-
Hi dear author, I have carefully read your paper and it has greatly inspired me. I would like to ask if I should follow these steps if I want to conduct experiments on my own dataset.
1. Using train_…
supgy updated
3 months ago
-
I cannot download the checkpoint from the download assets and all associated links. Please check.
-
I implemented the CatVTON approach with SDXL Inpainting as the base model including DREAM. And the loss curve looks good & drops to ~0.001 after several epochs. However, the resulting images are just…
-
Do they only differ in the use of a VAE to encode the inputs into embeddings (and the conditional input part)? So if I wanted to do this in latent space, I'd wrap this whole thing within the V…
-
Given that the Generator is being treated as a black box, I'm guessing we could probably use this with a conditional VAE as well, by just running it on the VAE's Decoder. Does that seem reasonable, or…
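A minimal sketch of the idea, assuming the Generator really is a black box: sample a latent, decode it with a conditional VAE decoder, and hand the decoder output to the black-box step. All names, shapes, and the random "decoder" weights below are illustrative stand-ins, not the actual model from any repo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conditional "decoder": maps (latent z, one-hot condition c) -> sample x.
# The weights are random stand-ins; a real cVAE decoder is a trained network.
W_z = rng.normal(size=(8, 32))   # latent dim 8 -> output dim 32
W_c = rng.normal(size=(3, 32))   # 3 condition classes -> output dim 32

def cvae_decode(z, c_onehot):
    return np.tanh(z @ W_z + c_onehot @ W_c)

def generate(black_box, n, cond_id, latent_dim=8, n_classes=3):
    """Treat the cVAE decoder as the generator: sample z ~ N(0, I),
    decode under the chosen condition, then apply the black-box step
    to the decoded output."""
    z = rng.normal(size=(n, latent_dim))
    c = np.eye(n_classes)[np.full(n, cond_id)]
    x = cvae_decode(z, c)
    return black_box(x)

# Identity black box just to show the plumbing.
samples = generate(black_box=lambda x: x, n=4, cond_id=1)
print(samples.shape)  # (4, 32)
```

Since nothing here inspects the decoder's internals, swapping a GAN generator for a cVAE decoder only changes where the samples come from, which is why the black-box treatment should carry over.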
-
Hi!
I am trying to reproduce Fig. 3, in particular the UMAP of muris with the different cell types.
I have trained the models on the muris dataset.
I try to get the conditional samples…
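For what it's worth, the figure-style workflow can be sketched as: draw conditional samples per cell type, stack them with their labels, then embed in 2-D. The sampler below is a hypothetical stand-in for the trained model's conditional sampling call, and an SVD-based PCA stands in for `umap.UMAP(n_components=2).fit_transform(X)` to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained conditional sampler: draws samples for one cell type.
# In the real workflow this would be the model's conditional sampling API.
def sample_condition(cell_type_id, n, dim=16):
    center = np.zeros(dim)
    center[cell_type_id % dim] = 3.0
    return rng.normal(loc=center, size=(n, dim))

cell_types = ["B cell", "T cell", "macrophage"]
X = np.vstack([sample_condition(i, 50) for i in range(len(cell_types))])
labels = np.repeat(cell_types, 50)          # one label per sample, for coloring

# 2-D embedding via PCA (SVD of the centered data); swap in
# umap.UMAP(n_components=2).fit_transform(X) to match the paper's figure.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
emb = Xc @ Vt[:2].T

print(emb.shape, len(labels))  # (150, 2) 150
```

With `emb` and `labels` in hand, a scatter plot colored by label reproduces the shape of a cell-type UMAP panel.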
-
### Describe the bug
Running CFGCutoffCallback with ControlNet SDXL raises the following error:
````
diffusers/src/diffusers/models/attention.py:372, in BasicTransformerBlock.forward(self, hidden_st…
-
I am training the UNet part of a latent diffusion model with a conditional encoder. I have added one extra image-encoder module for reference conditioning. It's been a week since I started training on almost 3000 images and it h…