yuhaoliu7456 opened this issue 7 months ago
@yuhaoliu7456 could you provide more context? I tried it myself, and the outputs of the SD v1.4 `VAEDecoder` and `ConsistencyDecoder` differ significantly.

Code:
```python
import torch
from diffusers import AutoencoderKL

# Load the SD v1.4 VAE in fp16
vae = AutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    subfolder="vae",
    revision="fp16",
    torch_dtype=torch.float16,
).to(device)

# Decode the same latent with both decoders
sample_vae = vae.decode(latent).sample.detach()
sample_consistency = decoder_consistency(latent)
```
[image: VAE output]
[image: ConsistencyDistillation output]
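To put a number on "differ significantly", a simple pixel-level comparison of the two decoded outputs can help. A minimal sketch, assuming both decoders return images in the same `[-1, 1]` range (the usage with `sample_vae`/`sample_consistency` from the snippet above is hypothetical):

```python
import numpy as np

def decoder_diff_stats(img_a: np.ndarray, img_b: np.ndarray) -> dict:
    """Simple difference metrics between two decoded images.

    Both arrays must have the same shape and be scaled to the same
    range (e.g. [-1, 1] as returned by the decoders).
    """
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    # PSNR over a [-1, 1] range (peak-to-peak = 2.0); infinite if identical
    psnr = float("inf") if mse == 0 else 10 * np.log10((2.0 ** 2) / mse)
    return {"mse": mse, "max_abs": float(np.abs(diff).max()), "psnr": psnr}

# Hypothetical usage with the tensors from the snippet above:
# stats = decoder_diff_stats(sample_vae.float().cpu().numpy(),
#                            sample_consistency.float().cpu().numpy())
```

If the PSNR between the two outputs is high (say, above ~35 dB) the decoders are visually close; a low value confirms the significant difference.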
Can you provide the original image so I can test it myself? Thanks
@yuhaoliu7456 Sure, you can download it from this URL:
https://img.championat.com/c/900x900/news/big/p/l/real-madrid_1651732892490413154.jpg
Is it possible to load a local VAE encoder instead of pointing to Hugging Face?
@Kallamamran if you have a directory with the model weights and `config.json`, you can try:

`AutoencoderKL.from_pretrained(path_to_local_dir, subfolder="vae")` (or omit `subfolder` if the weights sit directly in the directory)
I tested two images using SD 1.4 from CompVis and this ConsistencyDecoder, and there seems to be no obvious difference between them. Has anyone else found this?