afiaka87 / clip-guided-diffusion

A CLI tool / Python module for generating images from text using guided diffusion and CLIP from OpenAI.
MIT License

Shouldn't it be "pil_img" instead of "input"? #20

Closed PytaichukBohdan closed 2 years ago

PytaichukBohdan commented 2 years ago

https://github.com/afiaka87/clip-guided-diffusion/blob/54d273e714b41126c23f889fc7bb7851b56e5c74/cgd/clip_util.py#L59

PytaichukBohdan commented 2 years ago

Or even np.array(pil_img). But still, the code isn't running properly after those changes.

Got an error: RuntimeError: adaptive_avg_pool2d(): Expected input to have non-zero size for non-batch dimensions, but input has sizes [1, 1066, 0, 226] with dimension 2 being empty

afiaka87 commented 2 years ago

@PytaichukBohdan

Indeed, thanks for filing an issue! I'll patch it.

afiaka87 commented 2 years ago

https://github.com/afiaka87/clip-guided-diffusion/commit/c81fde0936b3c7f550a072581ac8d09dc4db22de should have fixed it. let me know if it works for you.

afiaka87 commented 2 years ago

@PytaichukBohdan - ah yes, another issue you may be facing is that you have to use multiples of (I believe) 16 for the size (when size < 128) and multiples of 32 for the offsets.
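
If it helps, here is a minimal sketch of snapping a requested size or offset to those multiples before passing it in. The helper name is made up, and the 16/32 constraint is taken from the comment above rather than verified against the code:

    def round_to_multiple(value: int, multiple: int) -> int:
        # Round down to the nearest multiple (e.g. 100 -> 96 for multiple=16).
        return (value // multiple) * multiple

    image_size = round_to_multiple(100, 16)   # 96, valid for size < 128
    offset = round_to_multiple(50, 32)        # 32, valid offset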

htoyryla commented 2 years ago

Still does not work. ResizeRight expects either a numpy array or a torch tensor, but now it gets a PIL image, which does not have a shape attribute.

This is what I tried, and at least it runs without an error:

    # tvf = torchvision.transforms.functional; resize_right, lanczos3,
    # make_cutouts, smallest_side and device come from the surrounding module.
    t_img = tvf.to_tensor(pil_img)                # PIL image -> (C, H, W) float tensor in [0, 1]
    t_img = resize_right.resize(t_img, out_shape=(smallest_side, smallest_side),
                                interp_method=lanczos3, support_sz=None,
                                antialiasing=True, by_convs=False, scale_tolerance=None)
    batch = make_cutouts(t_img.unsqueeze(0).to(device))

I am not sure what output shape was intended here. As it was, it produced 1024x512 from a 1024x1024 original with image_size 512; now this makes 512x512.
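
For what it's worth, if the intention was to scale the shortest side down to smallest_side while keeping the aspect ratio (rather than forcing a square as my snippet above does), here is a sketch along the same lines, reusing the names from that snippet (tvf, resize_right, lanczos3, make_cutouts, device). This is just my guess at the intended behaviour, not the repo's actual code:

    t = tvf.to_tensor(pil_img)                      # (C, H, W)
    c, h, w = t.shape
    scale = smallest_side / min(h, w)
    out_shape = (c, round(h * scale), round(w * scale))
    t_img = resize_right.resize(t, out_shape=out_shape,
                                interp_method=lanczos3, antialiasing=True)
    batch = make_cutouts(t_img.unsqueeze(0).to(device))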

I am not using offsets, BTW.

As to the images produced, I can't see much happening, but I guess that is another story. In my experience, guidance by comparing CLIP-encoded images is not very useful on its own, so I'll probably go my own way and add other forms of image-based guidance. This might depend on the kind of images I work with and how: more visuality than semantics.

PS. I see now that the init image actually means using perceptual losses as guidance, rather than initialising something (like one can do with VQGAN latents for instance). So that's more like what I am after.

htoyryla commented 2 years ago

> Or even np.array(pil_img). But still, the code isn't running properly after those changes.
>
> Got an error: RuntimeError: adaptive_avg_pool2d(): Expected input to have non-zero size for non-batch dimensions, but input has sizes [1, 1066, 0, 226] with dimension 2 being empty

I tried that first as well. I guess it fails because the numpy array has shape (h, w, c) while (I think) (c, h, w) is expected. Using to_tensor takes care of this.
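
A quick standalone way to see the difference (the 226x1066 size just mirrors the shapes in the traceback above):

    import numpy as np
    import torchvision.transforms.functional as tvf
    from PIL import Image

    pil_img = Image.new("RGB", (226, 1066))    # PIL size is (width, height)

    print(np.array(pil_img).shape)             # (1066, 226, 3) -> (H, W, C)
    print(tvf.to_tensor(pil_img).shape)        # torch.Size([3, 1066, 226]) -> (C, H, W)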