Closed: n00mkrad closed this issue 1 year ago
For context, this is my ComfyUI setup, which appears to load the CLIP model from the SDXL safetensors files (?) instead of downloading laion/CLIP-ViT-bigG-14-laion2B-39B-b160k.
You can do the same thing using DiffusionPipeline.from_single_file('/path/to/file.ckpt', use_safetensors=True).
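Spelled out with its import, the suggestion is roughly this (a sketch; the checkpoint path is a placeholder, and whether DiffusionPipeline exposes from_single_file depends on your diffusers version, as the errors below show):

```python
from diffusers import DiffusionPipeline

# Sketch of the suggested call; the checkpoint path is a placeholder.
# On diffusers versions where the single-file loader isn't wired up on
# DiffusionPipeline, this raises the AttributeError reported below.
pipe = DiffusionPipeline.from_single_file(
    "/path/to/file.ckpt",
    use_safetensors=True,
)
```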
NameError: name 'DiffusionPipeline' is not defined
If I import it:
AttributeError: type object 'DiffusionPipeline' has no attribute 'from_single_file'
Using from_pretrained instead won't allow me to load from safetensors:
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'M:/Weights/SD/XL/sd_xl_base_0.9.safetensors'. Use repo_type argument if needed.
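For context, that HFValidationError is from_pretrained rejecting a plain file path: it expects a Hub repo id or a local directory in diffusers format, not a single .safetensors checkpoint. A minimal sketch (the directory path is a placeholder):

```python
import torch
from diffusers import DiffusionPipeline

# from_pretrained wants a repo id ("namespace/repo_name") or a local
# diffusers-format directory, not a path to one .safetensors file.
pipe = DiffusionPipeline.from_pretrained(
    "M:/Weights/SD/XL/sdxl-base-diffusers",  # placeholder local directory
    torch_dtype=torch.float16,
)
```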
@patrickvonplaten may have only added the SingleFileMixin to the SDXL pipeline.
Try using StableDiffusionXLPipeline directly.
What do you mean by "directly"?
This is my code:
from diffusers import StableDiffusionXLPipeline
import torch
import sys
pipe = StableDiffusionXLPipeline.from_single_file("M:/Weights/SD/XL/sd_xl_base_0.9.safetensors", torch_dtype=torch.float16)
pipe.to("cuda")
prompt = "astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt=prompt).images[0]
image.save(f"{sys.path[0]}/sdxl-test.png")
And this tries to download CLIP-ViT-bigG-14-laion2B-39B-b160k, which I want to avoid because it's 10 GB, and ComfyUI works without this model so it must be possible.
Hey @n00mkrad,
Could you please load the model as described here: https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#texttoimage
I'm working on improving the from_single_file loading functionality.
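For reference, the documented loading path looks roughly like this (a sketch based on the linked docs; the stabilityai/stable-diffusion-xl-base-0.9 repo id and the fp16 variant flags are assumptions):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model in diffusers format from the Hub; repo id
# and fp16 variant are assumed from the linked documentation page.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")
```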
Yep, from_pretrained works; it will not attempt to download laion/CLIP-ViT-bigG-14-laion2B-39B-b160k.
from_single_file btw also eats a ton of RAM when first loading the model (maxed out 32 GB). Guess it's the conversion that's causing issues.
32 GB is huge. I'm also interested in loading from a single safetensors file.
So, it seems like from_single_file is where the issue is. Could we maybe edit the original post to make that a bit clearer?
Working on improving it
@n00mkrad,
Let me know if we can close this issue now that #4041 is merged
Negative. Still attempts to download CLIP-ViT-bigG-14-laion2B-39B-b160k.
It really shouldn't 😅 Can you copy-paste your diffusers version here?
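A quick way to print the installed version (standard Python; diffusers also ships a diffusers-cli env command for a fuller environment report):

```python
import diffusers

# Prints the installed diffusers version, e.g. "0.18.1".
print(diffusers.__version__)
```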
I just updated to the latest master, and it works now.
However, the conversion temporarily eats up about 34 GB of RAM. Is this expected behavior?
It can't memory-map the old checkpoint style, but I'm not sure if that's the specific reason.
CPU RAM, no? Yeah, that doesn't shock me too much since we're working in fp32 precision.
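One way to pay that conversion cost only once is to convert the checkpoint and save it in diffusers format, then reload it with from_pretrained afterwards (a sketch; the output directory is a placeholder, and save_pretrained is the standard pipeline API for saving):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# One-time conversion: load the single-file checkpoint, then save it in
# diffusers format so later loads skip the expensive conversion step.
pipe = StableDiffusionXLPipeline.from_single_file(
    "M:/Weights/SD/XL/sd_xl_base_0.9.safetensors",
    torch_dtype=torch.float16,
)
pipe.save_pretrained("M:/Weights/SD/XL/sd_xl_base_0.9_diffusers")  # placeholder dir

# Later runs load the saved directory directly, with no conversion.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "M:/Weights/SD/XL/sd_xl_base_0.9_diffusers",
    torch_dtype=torch.float16,
)
```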
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I have downloaded it locally. Does anyone know which folder to put it in? I'm working on a cloud machine, and it cannot connect to Hugging Face.
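One approach that should work (a sketch; the exact cache layout is an assumption, check your huggingface_hub version): copy the downloaded laion/CLIP-ViT-bigG-14-laion2B-39B-b160k snapshot into the default Hub cache on the cloud machine (~/.cache/huggingface/hub/models--laion--CLIP-ViT-bigG-14-laion2B-39B-b160k) and force offline mode so nothing is fetched:

```python
import os

# Must be set before anything touches the Hub; with the model already in
# the local cache, nothing is downloaded.
os.environ["HF_HUB_OFFLINE"] = "1"

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/sd_xl_base_0.9.safetensors",  # placeholder local path
    torch_dtype=torch.float16,
)
```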
Describe the bug
Running StableDiffusionXLPipeline downloads laion/CLIP-ViT-bigG-14-laion2B-39B-b160k, which is about 10 GB in size. Is it possible to avoid this big download?
ComfyUI, for example, seems to be able to run SDXL without this huge CLIP model.
Reproduction
Run this code example:
https://github.com/huggingface/diffusers/releases/tag/v0.18.1
The script will load the SD model, then download laion/CLIP-ViT-bigG-14-laion2B-39B-b160k into the default HF cache directory.
Logs
System Info
Diffusers 0.18.1
Windows 10 64-bit
Python 3.10.7
PyTorch 2.0.1
Who can help?
No response