-
### Background and motivation
On some newer x86 CPUs, VAES provides wider variants of the encryption/decryption instructions included in the older AES-NI instruction set.
The 256-bit VEX-encoded variant (effectively ope…
-
I use Comfy as a backend in my app, and especially after using many LoRAs, the CPU RAM usage gradually climbs. The weird part is that the RAM usage exceeds the total size of all models/LoRAs/VAEs, etc.
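A first diagnostic step is to separate Python-heap growth from memory held below Python (native tensor buffers, allocator fragmentation). A minimal sketch using the standard-library `tracemalloc` module, with a dummy buffer standing in for a model/LoRA load:

```python
import tracemalloc

# Sketch: snapshot Python-level allocations around load/unload cycles.
# If tracemalloc reports little growth while the process RSS keeps climbing,
# the extra RAM is likely held below Python (native buffers, allocator
# fragmentation or caching), not leaked Python objects.
tracemalloc.start()

def load_unload_cycle():
    # hypothetical stand-in for loading and dropping a model/LoRA
    blob = bytearray(8 * 1024 * 1024)  # ~8 MiB dummy weight buffer
    del blob

before, _ = tracemalloc.get_traced_memory()
for _ in range(10):
    load_unload_cycle()
after, peak = tracemalloc.get_traced_memory()

growth_mib = (after - before) / 2**20
print(f"python-heap growth: {growth_mib:.1f} MiB (peak {peak / 2**20:.1f} MiB)")
```

If the reported growth stays near zero across many cycles, the climb you see in RSS is happening outside the Python heap.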
-
I'm trying to select fp8 for the decoder, but I don't see it. Any help would be appreciated. Thanks in advance.
![tooncrafter-no-fp8](https://github.com/user-attachments/assets/902701fa-10d2-46ca-91a…
-
I'm trying the same thing as https://github.com/huggingface/diffusers/issues/3567 using from_single_file() (assuming this is a renamed from_ckpt()).
So far, this is what I have:
pipe = Stable…
-
In pretrained_vq, the models are from different epochs.
Now I am training a VQVAE from scratch.
So after training the VQVAE, how do I choose which model to use for training EMAGE?
Thanks
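A common way to pick the checkpoint for the next stage, absent guidance from the authors, is to evaluate each saved epoch on a held-out set and take the one with the lowest reconstruction loss. A minimal sketch with hypothetical file names and losses standing in for a real evaluation loop:

```python
# Hypothetical validation reconstruction losses per saved VQVAE checkpoint;
# in practice these would come from running each checkpoint on a held-out set.
val_loss = {
    "vqvae_epoch_100.pt": 0.412,
    "vqvae_epoch_200.pt": 0.355,
    "vqvae_epoch_300.pt": 0.361,  # reconstruction started to degrade
}

# Pick the checkpoint with the lowest validation loss.
best_ckpt = min(val_loss, key=val_loss.get)
print(best_ckpt)  # -> vqvae_epoch_200.pt
```

The same selection logic applies whether the metric is plain reconstruction error or the full VQ objective (reconstruction plus commitment loss).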
-
Hi, do you have the trained SELFIES VAE available for download somewhere?
-
With very large open models like SD3 Medium and Flux.1 gaining popularity, it's becoming common to provide the diffusion model (UNet/diffusion transformer) part of the model and the text encoders separa…
-
Hi,
We're trying the ex_stats_print.c application, and we enabled opt.metadata_thp: "always".
However, it seems it is never used, and we have no clue why (the stats show metadata_thp: 0).
PS: We also ena…
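For reference, a sketch of the standard ways jemalloc picks up `opt.metadata_thp` (assuming a stock jemalloc build; either mechanism alone is enough):

```shell
# Run-time, via the MALLOC_CONF environment variable:
MALLOC_CONF="metadata_thp:always" ./ex_stats_print

# Or baked in as the compile-time default:
#   ./configure --with-malloc-conf=metadata_thp:always
```

One thing worth checking when the setting appears to have no effect: `metadata_thp` relies on the kernel honoring `madvise(MADV_HUGEPAGE)`, so if `/sys/kernel/mm/transparent_hugepage/enabled` is set to `never`, no huge pages will be used regardless of the jemalloc option.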
-
A naive question: how do you decode image latents at different resolutions? Since the decoder is only trained on specific latent resolutions, did you split higher-resolution images into many 51…
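One common workaround (not necessarily what this repo does) is tiled decoding: split the latent into overlapping tiles, decode each tile at the resolution the decoder was trained on, and average the overlapping regions of the outputs. A minimal sketch with a toy nearest-neighbour "decoder" standing in for the real VAE decoder; tile size, overlap, and the 8x scale factor are illustrative assumptions:

```python
import numpy as np

def tile_spans(size, tile, overlap):
    """Overlapping (start, end) windows of width `tile` covering [0, size)."""
    stride = tile - overlap
    spans, s = [], 0
    while s + tile < size:
        spans.append((s, s + tile))
        s += stride
    spans.append((size - tile, size))  # last tile flush with the edge
    return spans

def decode_tiled(latent, decode, tile=64, overlap=16, scale=8):
    """Decode a 2-D latent tile by tile, averaging overlapping output regions."""
    H, W = latent.shape
    out = np.zeros((H * scale, W * scale))
    hits = np.zeros_like(out)
    for y0, y1 in tile_spans(H, tile, overlap):
        for x0, x1 in tile_spans(W, tile, overlap):
            out[y0 * scale:y1 * scale, x0 * scale:x1 * scale] += decode(latent[y0:y1, x0:x1])
            hits[y0 * scale:y1 * scale, x0 * scale:x1 * scale] += 1
    return out / hits  # average where tiles overlapped

# Toy stand-in for the VAE decoder: 8x nearest-neighbour upsampling.
toy_decode = lambda z: np.repeat(np.repeat(z, 8, axis=0), 8, axis=1)

latent = np.random.default_rng(0).standard_normal((96, 96))
image = decode_tiled(latent, toy_decode)
print(image.shape)  # (768, 768)
```

With a real convolutional decoder the receptive field crosses tile borders, so plain averaging can leave faint seams; larger overlaps or feathered (weighted) blending in the overlap region are the usual mitigations.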
-
Hello! Thank you for your time reading this!
Your work on spatial-VAE is very impressive. I really appreciate that you released your code, and I've managed to run your code (```train_mnist.py```) an…