Closed: Tetragramm closed this issue 1 year ago
Same here
Ah, the merge function assumes the image is divisible by sx and sy, but the image gets downsampled 8x in each direction for the latent space, so in reality if sx and sy are 2, then the image needs to be divisible by 16 in each direction.
1080 is not divisible by 16, which is why you get that error. I can fix this, though, by treating the extra tokens as src tokens. Give me one sec.
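To make the constraint above concrete, here is a minimal sketch. The helper names are my own for illustration, not part of the tomesd library:

```python
# Hypothetical helpers illustrating the divisibility constraint described
# above: the VAE shrinks the image 8x into latent space, and the 2D token
# merge then partitions that latent grid into sx-by-sy blocks.
LATENT_DOWNSAMPLE = 8

def required_divisors(sx: int, sy: int) -> tuple[int, int]:
    # the image width must divide by sx * 8 and the height by sy * 8
    return sx * LATENT_DOWNSAMPLE, sy * LATENT_DOWNSAMPLE

def is_compatible(width: int, height: int, sx: int = 2, sy: int = 2) -> bool:
    dx, dy = required_divisors(sx, sy)
    return width % dx == 0 and height % dy == 0

# 1920 % 16 == 0, but 1080 % 16 == 8, hence the original error
```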
Fixed. Tested and was able to generate 1920x1080 images. Feel free to re-open if there are still any issues.
Unfortunately, I do not believe it worked entirely. Or at all. This is the error for 1920x1080 now.
```
  File "d:\libraries\tomesd\tomesd\patch.py", line 48, in _forward
    m_a, m_c, m_m, u_a, u_c, u_m = compute_merge(x, self._tome_info)
  File "d:\libraries\tomesd\tomesd\patch.py", line 21, in compute_merge
    m, u = merge.bipartite_soft_matching_random2d(x, w, h, args["sx"], args["sy"], r, not args["use_rand"])
  File "d:\libraries\tomesd\tomesd\merge.py", line 71, in bipartite_soft_matching_random2d
    a, b = split(metric)
  File "d:\libraries\tomesd\tomesd\merge.py", line 65, in split
    src = x.gather(dim=1, index=a_idx.expand(B, N - num_dst, C))
RuntimeError: The expanded size of the tensor (6180) must match the existing size (6060) at non-singleton dimension 1. Target sizes: [2, 6180, 640]. Tensor sizes: [1, 6060, 1]
```
Hmm, odd. I was able to generate a 1080p image on my end with the web UI. Admittedly, though, that was with my own implementation; I haven't used the plugin. I also tested with the diffusers implementation I just merged, and it worked as well. I had to use 640x1080 as the example because 1920x1080 runs out of memory without xformers for me, but here's the code to reproduce:
```python
import torch, tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply ToMe with a 50% merging ratio
tomesd.apply_patch(pipe, ratio=0.5)

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    width=640, height=1080, num_inference_steps=20,
).images[0]
image.save("astronaut.png")
```
Ah, I see the issue now. I can reproduce your error with:

```python
tomesd.apply_patch(pipe, ratio=0.3, max_downsample=2)
```
Seems an assumption I made didn't hold for max_downsample > 1. I'll work on a fix.
Okay, turns out I'm dumb and forgot to apply one of the changes to the width and height of the image. Should work now.
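For anyone following along, the failure mode can be sketched like this: with max_downsample > 1, the deeper UNet blocks operate on a smaller token grid, so the latent width and height have to be rescaled before computing the merge rather than taken from the original latent size. A rough sketch (the function name and structure are mine, not the actual tomesd code):

```python
import math

# Hypothetical sketch of the bug class: at deeper UNet levels the token
# grid shrinks, so w and h must be inferred from the token count that
# actually arrived, not assumed from the full-resolution latent.
def grid_at_level(orig_w: int, orig_h: int, num_tokens: int) -> tuple[int, int]:
    # infer the extra downsample factor from the token count
    downsample = round(math.sqrt((orig_w * orig_h) / num_tokens))
    return math.ceil(orig_w / downsample), math.ceil(orig_h / downsample)

# a 1920x1080 image has a 240x135 latent; the 2x-downsampled UNet level
# carries 120 * 68 = 8160 tokens, not 240 * 135 = 32400
w, h = grid_at_level(240, 135, 8160)
```

Forgetting that rescaling means the merge indices are built for one grid size while the tensor has another, which is exactly the kind of shape mismatch the gather/expand traceback above reports.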
Some resolutions cause errors while running the model.
The following error is from a 1920x1080 run; 1024x1024 works, though.
I am running it as part of Auto1111, but the extension is working fine and calling the tomesd library fine. Settings are