rupeshs / fastsdcpu

Fast stable diffusion on CPU
MIT License

More models and functions #117

Closed · Micraow closed this issue 5 months ago

Micraow commented 6 months ago

Well, today I tried openvinotoolkit/stable-diffusion-webui because I wanted to run ChilloutMix-Ni-pruned-fp16 and DreamShaper XL, which fastsdcpu doesn't support yet. However, I found it difficult to configure for low-memory, CPU-only devices: DreamShaper XL needs the DPM++ SDE Karras sampler, which openvinotoolkit/stable-diffusion-webui doesn't support, and I ran into an error when using ChilloutMix-Ni-pruned-fp16. So I hope fastsdcpu can support these two models. I would also like to use pix2pix and outpainting; will fastsdcpu add them? Thank you for developing such an easy-to-use program!

rupeshs commented 6 months ago

Not planning to add support for the ChilloutMix NSFW model. Please try DreamShaper XL in LCM-LoRA mode and select lcm-lcm-lora-sdxl:
[screenshot: LCM-LoRA model selection]

Micraow commented 6 months ago

OK, I have tried that and found that my RAM is too small. I have 8 GB of RAM and 13 GB of swap, but it always fails with an out-of-memory (OOM) error. I noticed that you offer an int8 model, and that one works well. Could you please share the script you used for quantization?

P.S. Happy 2024!

rupeshs commented 6 months ago

Happy New Year! You can use lcm-openvino-converter (https://github.com/rupeshs/lcm-openvino-converter). Also, pass --int8 to the optimum-cli command to compress the model weights.
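
For readers following along, here is a rough Python-API sketch of that export-plus-int8 step, assuming a recent optimum-intel release. The model ID, output directory, and the load_in_8bit keyword are assumptions (older releases exposed the same weight compression through the --int8 CLI flag mentioned above), so treat this as illustrative rather than the converter's actual code:

```python
# Illustrative sketch (not the lcm-openvino-converter script): export an LCM
# checkpoint to OpenVINO IR and compress its weights to int8 with optimum-intel.
from optimum.intel import OVLatentConsistencyModelPipeline

pipe = OVLatentConsistencyModelPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",  # placeholder model ID
    export=True,                     # convert the PyTorch weights to OpenVINO IR
    load_in_8bit=True,               # 8-bit weight-only compression (assumed kwarg)
)
pipe.save_pretrained("lcm-dreamshaper-v7-ov-int8")  # placeholder output directory
```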

Micraow commented 6 months ago

Well, I wonder if there are other ways to prune, such as using NNCF? Does fastsdcpu support other regular LoRAs as well? I also have a question about quantization: should we turn every weight into int8, or should we leave some in fp16?
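
For reference, a minimal sketch of weight-only int8 compression with NNCF on an already exported OpenVINO IR; the paths are placeholders and this is not the converter referenced above. By default, nncf.compress_weights quantizes only the weights of matrix-multiply and embedding layers to int8, so activations and the remaining ops stay in the original fp16/fp32 precision, i.e. not every value ends up in int8:

```python
# Minimal sketch: weight-only int8 compression of an OpenVINO model with NNCF.
# Paths are placeholders for an already exported FP16/FP32 IR.
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("unet/openvino_model.xml")         # input IR (placeholder path)
compressed = nncf.compress_weights(model)                  # int8 weight-only quantization
ov.save_model(compressed, "unet_int8/openvino_model.xml")  # write the compressed IR
```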