YuxinWenRick / hard-prompts-made-easy

MIT License
580 stars, 54 forks

Only for 2.1? #1

Closed gigadeplex closed 1 year ago

gigadeplex commented 1 year ago

Hi, is this only for SD 2.1, or can it be used for 1.5? I guess I can just switch the CLIP model, right?

YuxinWenRick commented 1 year ago

Hi, sorry for not including that important detail in the README, but sure, you can use it for 1.5. SD 1.5 uses the text encoder of OpenAI's CLIP ViT-L/14, so you can simply change the arguments: `args.clip_model = "ViT-L-14"` and `args.clip_pretrain = "openai"`. Also, if you are using the Jupyter notebook, you may want to set `model_id = "runwayml/stable-diffusion-v1-5"` when loading the Stable Diffusion model.

Let me know if you have further questions!
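For reference, a minimal sketch of the changes described above. The `args` object here is a stand-in namespace; in the repo these fields live on its existing argument object, and `model_id` is the variable used in the notebook when loading the diffusion pipeline.

```python
import argparse

# Stand-in for the repo's args object; in practice you would modify
# the existing args rather than create a new namespace.
args = argparse.Namespace()

# SD 1.5 uses the text encoder of OpenAI's CLIP ViT-L/14,
# so point the CLIP model and pretrained weights at it:
args.clip_model = "ViT-L-14"
args.clip_pretrain = "openai"

# In the Jupyter notebook, load SD 1.5 instead of 2.1:
model_id = "runwayml/stable-diffusion-v1-5"
```

With these settings, the optimized hard prompt is learned against the same text encoder that SD 1.5 uses, so the resulting prompt transfers to that model.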

demirklvc commented 1 year ago

> Hi, sorry for not including that important detail in the README, but sure, you can use it for 1.5. SD 1.5 uses the text encoder of OpenAI's CLIP ViT-L/14, so you can simply change the arguments: `args.clip_model = "ViT-L-14"` and `args.clip_pretrain = "openai"`. Also, if you are using the Jupyter notebook, you may want to set `model_id = "runwayml/stable-diffusion-v1-5"` when loading the Stable Diffusion model.

Where? In which file?