@walsharry Sadly, I think you're using it correctly! TAESD currently isn't as high-quality as the full-size VAE. I think TAESDXL and TAESD3 generally look better than the base TAESD (the SDXL and SD3 latent spaces are easier to decode), but if you want max quality you should always use the full-size VAEs.
*(comparison images: SD-VAE → TAESD | SDXL-VAE → TAESDXL | SD3-VAE → TAESD3)*
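If it helps, swapping between TAESD and the full-size VAE is a one-liner in diffusers. Here's a rough sketch (the SD 2.1 model ID and prompt are just placeholders):

```python
# Rough sketch of swapping decoders in diffusers (model IDs / prompt are placeholders)
import torch
from diffusers import StableDiffusionPipeline, AutoencoderTiny

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
full_vae = pipe.vae  # keep a handle to the original full-size VAE

# fast / low-memory decoding with TAESD (lower quality)
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
).to("cuda")
preview = pipe("an astronaut riding a horse").images[0]

# swap the full-size VAE back in whenever you want maximum quality
pipe.vae = full_vae
final = pipe("an astronaut riding a horse").images[0]
```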
@madebyollin thanks for the quick response. I am currently trying to use the TAESD decoder during training so that I can add an image-space loss to a model. The newer TAESD3 has twice the embedding dimensions and is too computationally expensive. Do you believe it is possible to train an image generation network with the smaller TAESD decoder and then swap to the original at inference?
Also, is there any documentation or paper about how you create the TAESD models?
Thanks,
Harry
> Do you believe it is possible to train an image generation network with the smaller TAESD decoder and then swap to the original at inference?
Yeah that should definitely work (https://tianweiy.github.io/dmd/ does this iirc). You can also always fine-tune your model with the full-size VAE at the end.
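Roughly, the training-time setup could look like this (just a sketch, not code from DMD or the TAESD repo; `model`, `batch`, and the loss weighting are placeholder assumptions):

```python
# Sketch: frozen TAESD decoder used for an image-space loss during training;
# `model` and the `batch` contents are hypothetical placeholders.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderTiny

taesd = AutoencoderTiny.from_pretrained("madebyollin/taesd").to("cuda").eval()
taesd.requires_grad_(False)  # frozen, but gradients still flow back through it

def training_step(model, batch):
    pred_latents = model(batch["noisy_latents"], batch["timesteps"])
    latent_loss = F.mse_loss(pred_latents, batch["target_latents"])

    # decode predictions with the cheap TAESD decoder and penalize in pixel space
    # (assumes the latents are in the "scaled" space the diffusion model sees,
    #  and that target_images use the same value range as the decoder output)
    pred_images = taesd.decode(pred_latents).sample
    image_loss = F.mse_loss(pred_images, batch["target_images"])

    return latent_loss + 0.1 * image_loss  # placeholder weighting

# At inference you'd just decode the final latents with the full-size SD VAE
# (AutoencoderKL) instead of TAESD.
```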
> Is there any documentation or paper about how you create the TAESD models?
There's no paper or full report. I did post some example code here that shows how to do basic TAESDXL training with a pure adversarial loss. The released TAESD checkpoints used slightly more complicated training recipes (with some augmentations, auxiliary regression losses, etc.), but the core structure is the same. I've answered questions about training here and here, among other places.
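The core idea is just a small latent decoder trained with a GAN objective against real images. A toy sketch (NOT the actual TAESD recipe; the architectures, optimizers, and data below are throwaway placeholders) would look something like:

```python
# Toy sketch of "pure adversarial loss" training for a tiny latent decoder.
# Architectures, optimizers, and data are placeholders, not the TAESD recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

tiny_decoder = nn.Sequential(          # stand-in for a small conv decoder
    nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=8), nn.Conv2d(64, 3, 3, padding=1))
disc = nn.Sequential(                  # stand-in patch discriminator
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1))
opt_g = torch.optim.Adam(tiny_decoder.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

# dummy data; in practice, (image, full-VAE-encoder-latent) pairs from a dataset
dataloader = [(torch.randn(4, 3, 256, 256), torch.randn(4, 4, 32, 32))] * 10

for images, latents in dataloader:
    fake = tiny_decoder(latents)

    # discriminator step: real images vs. tiny-decoder reconstructions
    d_loss = (F.softplus(-disc(images)).mean()
              + F.softplus(disc(fake.detach())).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # decoder step: fool the discriminator
    # (the released checkpoints also add augmentations / auxiliary regression losses)
    g_loss = F.softplus(-disc(fake)).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```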
Thanks again for the help! Really appreciated
Hey @madebyollin,
Back again with another quick question: I want to use Stable Diffusion 2.1 to encode an image and then TAESD to decode it. However, the image quality is much lower than when using the SD2.1 decoder. Is this expected, or am I doing something wrong?
Input image
SD2.1 decoded
TAESD decoded
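For reference, the wiring looks roughly like this (a sketch using the diffusers AutoencoderKL / AutoencoderTiny classes; the file path and preprocessing are placeholder assumptions):

```python
# Sketch: encode with the SD 2.1 VAE, decode with TAESD (path is a placeholder)
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor
from diffusers import AutoencoderKL, AutoencoderTiny

device = "cuda"
sd_vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="vae").to(device)
taesd = AutoencoderTiny.from_pretrained("madebyollin/taesd").to(device)

img = to_tensor(Image.open("input.png").convert("RGB"))  # [0, 1], CHW; dims should be multiples of 8
img = img.unsqueeze(0).to(device) * 2 - 1                 # [-1, 1], NCHW

with torch.no_grad():
    latents = sd_vae.encode(img).latent_dist.sample()
    scaled = latents * sd_vae.config.scaling_factor   # "scaled" latents for TAESD
    sd_out = sd_vae.decode(latents).sample             # full-size SD2.1 decoder
    taesd_out = taesd.decode(scaled).sample             # tiny TAESD decoder
```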
Thanks again for the help, very much appreciated,
Harry