Closed FurkanGozukara closed 1 year ago
You can do:
pipeline = DiffusionPipeline.from_pretrained(ckpt_id, use_safetensors=True, variant="fp16")
to directly load the safetensors and fp16 variants of the checkpoints.
Thank you. How can I add a Kohya-script-trained LoRA safetensors file to this pipeline?
Now, that deviates a bit from the original issue you posted. But we have a document here: https://huggingface.co/docs/diffusers/main/en/training/lora#supporting-a1111-themed-lora-checkpoints-from-diffusers.
We have ongoing threads on Kohya as well. So, to centralize the discussions there, I am going to close this thread, assuming https://github.com/huggingface/diffusers/issues/4029#issuecomment-1630618340 solved your initial query. If not, please feel free to reopen.
Please forgive this comment on a closed ticket, but this may be helpful to others who stumbled upon this issue:
For others who got here via Google and are trying to load a standalone safetensors file (like one downloaded from a website that aggregates models), please try this command: pipe = StableDiffusionXLPipeline.from_single_file("/home/you/path/etc/my_sdxl_model_from_civitai.safetensors").
If one tries to load a standalone safetensors file with DiffusionPipeline.from_pretrained, it will fail with HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/foo/bar.safetensors'. Use `repo_type` argument if needed.
@JosephCatrambone What about using StableDiffusionXLPipeline.from_single_file("/home/you/path/etc/my_sdxl_model_from_civitai.safetensors")? It still downloads the text_encoder. Furthermore, I found that from_single_file hard-codes the path of the text_encoder instead of loading it locally.
@zengjie617789
@JosephCatrambone What about using StableDiffusionXLPipeline.from_single_file("/home/you/path/etc/my_sdxl_model_from_civitai.safetensors")? It still downloads the text_encoder.
If the safetensors file requires a text_encoder, then it will still be downloaded. There is a flag to disable this if your system cannot (or should not) connect to the internet while deployed: https://huggingface.co/docs/diffusers/v0.24.0/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.local_files_only
Furthermore, I found that from_single_file hard-codes the path of the text_encoder instead of loading it locally.
😖 If I am understanding correctly, from_single_file is hard-coding the text_encoder? That is not good. You may be able to load the text_encoder separately with https://huggingface.co/docs/diffusers/v0.24.0/en/api/loaders/single_file#diffusers.loaders.FromSingleFileMixin.from_single_file.text_encoder.
my_text_encoder = load_my_text_encoder_here(...)  # placeholder: load your text encoder however you like
diffuser = StableDiffusionXLPipeline.from_single_file(
    "/home/you/path/etc/my_sdxl_model.safetensors",
    use_safetensors=True,
    text_encoder=my_text_encoder,
)
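One concrete way to fill in that placeholder (an assumption, not from the thread) is to load the encoder from a local directory with transformers:

```python
from transformers import CLIPTextModel

def load_text_encoder(local_dir: str) -> CLIPTextModel:
    """Load a CLIP text encoder from a local directory (hypothetical layout).

    `local_dir` should contain a config.json and the encoder weights, e.g. a
    previously downloaded copy of a checkpoint's text_encoder subfolder.
    """
    return CLIPTextModel.from_pretrained(local_dir)
```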
In Chinese (translated):
If local_files_only == False, then from_single_file will download the text_encoder (if the text_encoder does not already exist locally).
If the model hard-codes the text_encoder, you can try StableDiffusionXLPipeline.from_single_file(..., text_encoder=...).
Sorry, my Chinese is not good. :')
Cloning the entire repo takes 100 GB.
How can I make the code below use a .safetensors file instead of the diffusers format?
Let's say I have downloaded my safetensors file to path.safetensors. How do I provide it?
The code below works, but we are cloning 100 GB instead of just a single 14 GB safetensors file. A waste of bandwidth.
Also, how can I add a LoRA checkpoint to this pipeline? A LoRA checkpoint made by the Kohya script.