dbolya / tomesd

Speed up Stable Diffusion with this one simple trick!
MIT License

ToMe doesn't work in InvokeAI #35

Closed HNR1 closed 1 year ago

HNR1 commented 1 year ago

Hi, I tried adding ToMe to InvokeAI. I added tomesd.apply_patch(model, ratio=0.5) to the txt2img.py file and added the dependency to the pyproject.toml file, but it doesn't seem to work, as it has no effect on the diffusion process. It creates the same image in the same time with or without ToMe (given the same seed). Any idea what the problem is or what I'm missing?

Additionally, it seems like the txt2img file isn't even being read: I deliberately tried to run it without an import statement for tomesd and it didn't throw any exceptions or errors.
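
As a quick sanity check, something like the following could tell you whether the patch actually took (a rough sketch; it assumes `model` is, or contains, the underlying `torch.nn.Module`, and that tomesd swaps transformer blocks for classes whose names contain "ToMe", which is an assumption about tomesd internals and may vary by version):

```python
import tomesd

tomesd.apply_patch(model, ratio=0.5)

# If the patch applied, some transformer blocks should have been replaced
# by tomesd's own block classes (assumed here to have "tome" in the name).
patched = [name for name, module in model.named_modules()
           if "tome" in type(module).__name__.lower()]
print(f"{len(patched)} ToMe-patched modules found")
```

If that prints 0, the object you passed to apply_patch probably isn't the model the UI actually runs.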

dbolya commented 1 year ago

Hmm, I'm not familiar with that UI so I can't help you there. It seems like you didn't apply the patch in the right place or the UI doesn't use a supported SD pipeline (either the original SD repo or diffusers).
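
For reference, applying the patch to a stock diffusers pipeline looks roughly like this (a minimal sketch following the tomesd README; the model id, device, and ratio are just placeholders):

```python
import torch
import tomesd
from diffusers import StableDiffusionPipeline

# Load a standard Stable Diffusion pipeline (swap in whatever checkpoint/device you use).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Patch the pipeline in place; higher ratios merge more tokens (faster, but may cost quality).
tomesd.apply_patch(pipe, ratio=0.5)

image = pipe("a photo of an astronaut riding a horse").images[0]
```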

HNR1 commented 1 year ago

Indeed, I put the apply_patch function in the wrong spot. I believe it needs to go in the generate.py file. That results in a PyTorch MPS error telling me to remedy it with export PYTORCH_ENABLE_MPS_FALLBACK=1.

I've switched to HuggingFace Diffusers now, so this issue is closed for me.

HNR1 commented 1 year ago

If anyone encounters the same issue, you need to go to invokeai/backend/generate.py and put apply_patch() in the prompt2image method right after the model is instantiated (and then manually set PYTORCH_ENABLE_MPS_FALLBACK=1 if necessary).
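
Roughly, the placement looks like this (the surrounding InvokeAI code is paraphrased, so treat the method body and the load_model helper as illustrative, not the real file contents):

```python
# invokeai/backend/generate.py (illustrative excerpt only)
import tomesd

class Generate:
    def prompt2image(self, prompt, **kwargs):
        # However InvokeAI actually instantiates the SD model here (hypothetical helper name):
        model = self.load_model()
        # Patch immediately after the model is created, before sampling starts.
        tomesd.apply_patch(model, ratio=0.5)
        ...
```

On Apple Silicon you may also need `export PYTORCH_ENABLE_MPS_FALLBACK=1` before launching, as noted above.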