dbolya / tomesd

Speed up Stable Diffusion with this one simple trick!
MIT License

How can I get the result in Figure 1? #17

Closed JunnYu closed 1 year ago

JunnYu commented 1 year ago
[screenshot of Figure 1 from the paper/README]

I want to reproduce the result in Figure 1. Can you share your hyperparameters? Thanks!

```python
import torch, tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply ToMe with a 50% merging ratio
# tomesd.apply_patch(pipe, ratio=0.5)  # Can also use pipe.unet in place of pipe here

# Baseline (no ToMe): this call OOMs at 2048x2048 on my 80 GB A100
image = pipe("a photo of an astronaut riding a horse on mars", height=2048, width=2048).images[0]
image.save("astronaut.png")
```

I want to reproduce the baseline, xformers, and xformers+ToMe results.
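
For reference, a minimal sketch of the xformers and xformers+ToMe configurations in diffusers (assuming the xformers package is installed; the 0.5 ratio is just the README's example value, not necessarily the Figure 1 setting):

```python
import torch, tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# xformers: enable memory-efficient attention (requires the xformers package)
pipe.enable_xformers_memory_efficient_attention()

# xformers + ToMe: additionally patch the pipeline with a 50% merging ratio
tomesd.apply_patch(pipe, ratio=0.5)

image = pipe("a photo of an astronaut riding a horse on mars",
             height=2048, width=2048).images[0]
image.save("astronaut_tome_xformers.png")
```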

dbolya commented 1 year ago

Those results were obtained using the webui, which seems to have better memory management.

All with the same seed, of course.

JunnYu commented 1 year ago

Thanks, I will try it!

ShoufaChen commented 1 year ago

Hi @dbolya ,

Thanks for your awesome work. Could you provide more details about the 2048x2048 highres fix (30 steps)? Which model did you use to output images at 2048x2048 resolution?

dbolya commented 1 year ago

@ShoufaChen This was the original Stable Diffusion 1.5 model. Highres fix is an option in several Stable Diffusion UIs that generates an image at a lower resolution, upscales it, and then runs img2img at the higher resolution.
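
A rough sketch of that idea with diffusers (not the webui's actual implementation; the upscaling method, strength, and step counts below are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# 1) Generate at the model's native resolution.
low_res = txt2img(prompt, height=512, width=512, num_inference_steps=30).images[0]

# 2) Upscale the image (plain PIL resize here; the webui can use ESRGAN-style upscalers).
upscaled = low_res.resize((2048, 2048))

# 3) Re-denoise at the higher resolution with img2img to add detail.
image = img2img(prompt, image=upscaled, strength=0.6, num_inference_steps=30).images[0]
image.save("astronaut_highres.png")
```

ToMe can be patched onto both pipelines with tomesd.apply_patch to reduce memory and time at the 2048x2048 img2img step.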

ada-cheng commented 3 months ago

Hi @dbolya, the 2048x2048 image is awesome! How can I add your method to a Stable Diffusion UI (so that I can upscale efficiently)? Thank you!