gigadeplex closed this issue 1 year ago
Hi, sorry for not including that important detail in the README file, but sure, you can use it for 1.5. SD 1.5 uses the text encoder of CLIP ViT-L/14 from OpenAI, so you can simply change the arguments to args.clip_model = "ViT-L-14" and args.clip_pretrain = "openai". Also, if you are using the Jupyter notebook, you may also want to change model_id = "runwayml/stable-diffusion-v1-5" when loading the Stable Diffusion model.
Let me know if you have further questions!
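In case it helps, here is a minimal sketch of the three overrides described above. The default values shown (an OpenCLIP ViT-H/14 checkpoint for SD 2.1) are assumptions about this repo's defaults, and SimpleNamespace is just a stand-in for the repo's actual parsed args object:

```python
from types import SimpleNamespace

# Stand-in for the repo's parsed args; the SD 2.1 defaults below are assumptions.
args = SimpleNamespace(
    clip_model="ViT-H-14",                 # assumed SD 2.1 default (OpenCLIP)
    clip_pretrain="laion2b_s32b_b79k",     # assumed SD 2.1 default checkpoint
    model_id="stabilityai/stable-diffusion-2-1",
)

# Overrides for SD 1.5, which uses OpenAI's CLIP ViT-L/14 text encoder:
args.clip_model = "ViT-L-14"
args.clip_pretrain = "openai"
args.model_id = "runwayml/stable-diffusion-v1-5"
```

With these three values changed, the rest of the pipeline should run unmodified.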
Where? In which file?
Hi, is this only for SD 2.1, or can it also be used for 1.5? I guess I can just switch the CLIP model, right?