JoaoLages / diffusers-interpret

Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
MIT License

Update diffusers-interpret to work with the latest diffusers package (0.8.0) #21

Open federicotorrielli opened 1 year ago

federicotorrielli commented 1 year ago

When using diffusers-interpret with the latest diffusers (0.8.0; yes, I need this version because I use the Euler discrete scheduler), it gives the following error:

ImportError: cannot import name 'preprocess_mask' from 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint' (/usr/local/lib/python3.7/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py)

Can it be fixed to work with it? 0.3.0 is now very outdated. Thanks!

JoaoLages commented 1 year ago

I understand the frustration, but I tried my best to make this package compatible with diffusers and Hugging Face did not help. I suggest that diffusers-interpret users help me by writing in this Issue and in this PR, stating that they'd like diffusers to incorporate those changes, so that packages like diffusers-interpret can stay fully compatible with newer versions. Usually, the Hugging Face community helps when a considerable number of people are interested.

JoaoLages commented 1 year ago

Basically, they need to accept text_embeddings in this method call and also remove the @torch.no_grad() decorator; then this package would not depend on their minor updates.
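
For context, here is a minimal sketch of why those two changes matter for a gradient-based interpretability package. The modules below are placeholders, not the real diffusers API; the point is only that passing precomputed text_embeddings into a call that is not wrapped in @torch.no_grad() lets gradients flow from the generated image back to the prompt tokens:

```python
import torch

# Stand-ins for the real components; the real text encoder and UNet/decoder come
# from transformers / diffusers. Only the gradient flow matters here.
text_encoder = torch.nn.Linear(4, 8)   # placeholder text encoder
denoiser = torch.nn.Linear(8, 3)       # placeholder for the denoising + decoding stack

token_inputs = torch.randn(1, 5, 4)    # placeholder tokenized prompt (batch, tokens, features)
text_embeddings = text_encoder(token_inputs)
text_embeddings.retain_grad()          # keep gradients on this non-leaf tensor

# Because the call is not wrapped in @torch.no_grad(), gradients can flow from
# the generated "image" back to the text embeddings that were passed in:
image = denoiser(text_embeddings)
image.sum().backward()

per_token_attribution = text_embeddings.grad.abs().sum(-1)
print(per_token_attribution)           # one attribution score per token
```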

federicotorrielli commented 1 year ago

I saw your PR on diffusers... can you open two PRs for the two different problems, explaining why? I will follow you there.

JoaoLages commented 1 year ago

To be honest, all the needed changes were made in this PR. It's not worth it for me to redo the code if they have not approved it previously. Try to comment in that PR and tag a couple of maintainers to get their attention to the lack of compatibility with packages like this 😞

federicotorrielli commented 1 year ago

I started writing in the different PRs, let's see if they really make a change. In the meantime, it would be awesome to have a PyPI diffusers package adapted to be compatible with this one, but based on the latest version!

federicotorrielli commented 1 year ago

@JoaoLages we can follow this suggestion and open the two feature requests.

keturn commented 1 year ago

@invoke-ai would also like an API that lets us provide our own text_embeddings, as it's necessary for us and anyone else who does any kind of token-weighting.

We're still in the very messy middle phase of our diffusers integration, and we haven't started trying to submit pipeline API changes like this upstream yet, but I think we'd be very much on board with a PR or feature request for that.

As things stand at the moment, diffusers really expects that an application this sophisticated will more or less completely disregard its existing StableDiffusionPipeline class and write its own.

There has been some progress on this front; for example, StableDiffusionPipeline now exposes methods like prepare_latents and decode_latents that an application-specific subclass can take advantage of. But Patrick seems to really like long methods and hasn't been prioritizing extensibility.
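
A rough sketch of that subclassing pattern, assuming the decode_latents helper mentioned above is available on the installed pipeline; the method name and signature added here are illustrative, not an existing API:

```python
from diffusers import StableDiffusionPipeline

class InterpretableStableDiffusionPipeline(StableDiffusionPipeline):
    # Hypothetical entry point: drive generation from precomputed embeddings and
    # latents instead of rewriting the whole __call__ from scratch.
    def generate_from_embeddings(self, text_embeddings, latents, num_inference_steps=50):
        # ... application-specific denoising loop using `text_embeddings` and
        #     `latents`, deliberately not wrapped in @torch.no_grad() ...
        # Reuse the pipeline's own helper to turn latents back into images:
        return self.decode_latents(latents)
```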

(InvokeAI is interested in adding some interpretation aids. We'll see how this goes.)