Here is the error:

```
AttributeError                            Traceback (most recent call last)
in
      1 generator = torch.Generator(device=device).manual_seed(1024)
      2 with autocast("cuda"):
----> 3 images = pipe(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5, generator=generator)["sample"]

1 frames
in __call__(self, prompt, init_image, strength, num_inference_steps, guidance_scale, eta, generator, output_type)
     56
     57     # encode the init image into latents and scale the latents
---> 58     init_latents = self.vae.encode(init_image.to(self.device)).sample()
     59     init_latents = 0.18215 * init_latents
     60

AttributeError: 'AutoencoderKLOutput' object has no attribute 'sample'
```
I found a similar problem in a different notebook. There they responded:

> We need to update this notebook for diffusers==0.3.0!
> For now could you replace
> `latents = vae.encode(batch["pixel_values"]).sample().detach()`
> with
> `latents = vae.encode(batch["pixel_values"]).latent_dist.sample().detach()`

but I couldn't find out what to change in this Stable Craiyon notebook.
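If I understand the quoted answer correctly, diffusers 0.3.0 changed `vae.encode()` to return an `AutoencoderKLOutput` wrapper whose `latent_dist` attribute holds the object with the `sample()` method, which is why calling `.sample()` directly on the result now fails. The sketch below uses hypothetical stand-in classes (not the real diffusers implementations) just to reproduce the error and show the pattern of the fix:

```python
class DiagonalGaussianDistribution:
    """Hypothetical stand-in for the distribution object diffusers returns."""
    def __init__(self, value):
        self.value = value

    def sample(self):
        # The real class draws a latent from a Gaussian; this stub just
        # returns the stored value so the example stays deterministic.
        return self.value


class AutoencoderKLOutput:
    """Stand-in for the diffusers wrapper: it has .latent_dist, not .sample()."""
    def __init__(self, latent_dist):
        self.latent_dist = latent_dist


def encode(pixel_values):
    # Mimics vae.encode() in diffusers>=0.3.0: the result is wrapped.
    return AutoencoderKLOutput(DiagonalGaussianDistribution(pixel_values))


out = encode([1.0, 2.0])

try:
    out.sample()  # old call site: fails just like the notebook does
except AttributeError as err:
    print(err)    # 'AutoencoderKLOutput' object has no attribute 'sample'

latents = out.latent_dist.sample()  # updated call site works
print(latents)
```

By the same pattern, I would guess line 58 of this pipeline becomes `init_latents = self.vae.encode(init_image.to(self.device)).latent_dist.sample()`, but I haven't been able to verify that against the Stable Craiyon notebook.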