madebyollin / taesd

Tiny AutoEncoder for Stable Diffusion
MIT License
588 stars 28 forks

FR: TAE for SVD #14

Open Ednaordinary opened 7 months ago

Ednaordinary commented 7 months ago

SVD can now reach really fast speeds step-wise, but it's limited by the slow speed of the VAE. Is there any way to distill the temporal/spatial autoencoder the same way as the regular autoencoder?

madebyollin commented 7 months ago

Definitely possible! I worry there's a fairly narrow band of usefulness for a TAESVD, though, since for cheap previews you can run TAESD per-frame and for max quality you should just run the SVD VAE.

(I started training a TAESVD with temporal layers a few months ago - top is GT, middle is per-frame TAESD, and bottom is TAESD with temporal layers - but I haven't gotten around to finishing it)
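For reference, "per-frame TAESD" just means folding the frame axis into the batch (or looping over frames) and running the single-image decoder on each latent independently. Here is a minimal sketch of that loop; `toy_decode` is a hypothetical stand-in for the real TAESD decoder (which is a small conv net, but with the same 8× upsampling shapes):

```python
import numpy as np

def decode_video_per_frame(latents, decode_frame):
    """Decode a video latent of shape (num_frames, 4, h, w) one frame
    at a time with a single-image decoder, e.g. TAESD's decoder.
    SD-style VAEs upsample 8x, so the result is (num_frames, 3, 8h, 8w)."""
    return np.stack([decode_frame(z) for z in latents])

# Hypothetical stand-in decoder: nearest-neighbor 8x upsample of the
# first 3 latent channels (shapes match TAESD; the math does not).
def toy_decode(z):
    rgb = z[:3]                                  # (3, h, w)
    return rgb.repeat(8, axis=1).repeat(8, axis=2)

frames = decode_video_per_frame(np.zeros((14, 4, 9, 16)), toy_decode)
# frames has shape (14, 3, 72, 128)
```

Because each frame is decoded with no knowledge of its neighbors, this is fast but can flicker, which is exactly the temporal smoothness that the temporal layers are meant to recover.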

Ednaordinary commented 7 months ago

Since each frame after the first should only differ slightly from the previous frame (rather than being an entirely new frame), would it be viable to first decode the first frame with the original SVD VAE (which should take about the same memory/time as the undistilled regular SD VAE in this context), then decode the rest with a VAE trained specifically on the latent difference between the current and previous frame, outputting a difference map to be applied to the last decoded frame? Kind of like a video codec. It does move away from the simplicity of decoding normally, though.
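The codec-style scheme described above can be sketched as follows. Both `full_decode` (standing in for the SVD VAE) and `delta_decode` (standing in for the hypothetical difference-trained decoder) are placeholders here, not real models:

```python
import numpy as np

def codec_style_decode(latents, full_decode, delta_decode):
    """Decode frame 0 with the full VAE, then reconstruct each later
    frame by adding a decoded difference map to the previous frame,
    analogous to I-frames and P-frames in a video codec."""
    frames = [full_decode(latents[0])]
    for prev_z, cur_z in zip(latents[:-1], latents[1:]):
        diff = delta_decode(cur_z - prev_z)  # hypothetical delta decoder
        frames.append(frames[-1] + diff)
    return np.stack(frames)

# Toy linear stand-in for both decoders: nearest-neighbor 8x upsample
# of the first 3 latent channels (real decoders are conv nets).
def toy_decode(z):
    return z[:3].repeat(8, axis=1).repeat(8, axis=2)
```

With a linear stand-in the delta reconstruction is exact (the differences telescope back to the full frame); with a real, nonlinear delta decoder the errors would instead accumulate over the sequence, which is one practical risk of this approach.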

weleen commented 2 months ago

> Definitely possible! I worry there's a fairly narrow band of usefulness for a TAESVD, though, since for cheap previews you can run TAESD per-frame and for max quality you should just run the SVD VAE.
>
> (I started training a TAESVD with temporal layers a few months ago - top is GT, middle is per-frame TAESD, and bottom is TAESD with temporal layers - but I haven't gotten around to finishing it)

Hi @madebyollin, thanks for your work! I am curious to know if there are any updates regarding the TAESVD model.

> Since each frame after the first should only be a difference from the previous frame (not an entirely new frame), is it viable to first decode the first frame with the original SVD vae (should only take about the same memory/speed as undistilled regular SD vae in this context) then decode the rest on a vae that is specifically trained on the difference between the current and previous frame in the latents, outputting a difference map to be applied to the last decoded frame? Kinda like a video codec. Moves away from the simplicity of decoding normally though

Hi @Ednaordinary, do you find some alternative ways or repos to speed up the decoding process for SVD?

madebyollin commented 2 months ago

I've uploaded my initial TAESDV checkpoint + code to https://github.com/madebyollin/taesdv. It's still a bit WIP (see the TODOs in the README) but it should be capable of decoding much smoother videos than single-frame TAESD (while still being really fast).