Open yxsysu opened 1 year ago
I use diffusers and run the following code:
```python
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image
import torch

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip",
    torch_dtype=torch.float16,
    variant="fp16",  # note: the keyword is `variant`, not `variation`
)
pipe = pipe.to("cuda")

url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
init_image = load_image(url)

images = pipe(init_image).images
images[0].save("variation_image.png")
```
However, the results are inferior to those of the demo at clipdrop.co/stable-diffusion-reimagine.
Could you tell me how to reproduce the demo's results? Thanks for your reply.