Open shabri-arrahim opened 5 months ago
How was pipe initialized?
pipe = StableDiffusionXLPipeline.from_single_file(**model_params)
@sayakpaul
Where is "playground-v2.5-1024px-aesthetic.fp16.safetensors" coming from?
I downloaded it via this link: https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic/resolve/main/playground-v2.5-1024px-aesthetic.fp16.safetensors
Cc: @DN6 could you take a look?
I still don't know why this happens, but at least I know that text_encoder_2 fails to load. So if someone is experiencing a similar issue, you might find this workaround helpful (although I must admit it's not the most efficient approach 🙃).
import torch
from diffusers import StableDiffusionXLPipeline
from diffusers import DPMSolverMultistepScheduler
from transformers import CLIPTextModelWithProjection
from safetensors.torch import load_file as safe_load
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import convert_open_clip_checkpoint
checkpoint = safe_load("/workspace/playground-v2.5-1024px-aesthetic.fp16.safetensors", device="cpu")
# NOTE: this while loop isn't great but this controlnet checkpoint has one additional
# "state_dict" key https://huggingface.co/thibaud/controlnet-canny-sd21
while "state_dict" in checkpoint:
    checkpoint = checkpoint["state_dict"]
config_name = "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k"
config_kwargs = {"projection_dim": 1280}
text_encoder_2 = convert_open_clip_checkpoint(
    checkpoint,
    config_name,
    prefix="conditioner.embedders.1.model.",
    has_projection=True,
    local_files_only=False,
    **config_kwargs,
)
model_params = {
    "pretrained_model_link_or_path": "/workspace/playground-v2.5-1024px-aesthetic.fp16.safetensors",
    "torch_dtype": torch.float16,
    "use_safetensors": True,
    "add_watermarker": False,
}
pipe = StableDiffusionXLPipeline.from_single_file(**model_params)
pipe.text_encoder_2 = text_encoder_2
pipe.save_pretrained(
    save_directory="/workspace/playground-v2.5.fp16",
    safe_serialization=True,
    variant="fp16",
    push_to_hub=False,
)
Hmm, strange. Some tensors are not being saved in the OpenCLIP model when calling save_pretrained. Taking a look.
Hi @shabri-arrahim tracked the issue down to these lines https://github.com/huggingface/diffusers/blob/2b04ec2ff7270d2044410378b04d85a194fa3d4a/src/diffusers/loaders/single_file_utils.py#L1238-L1240
When accelerate is installed and we save to safetensors, we attempt to save those weights as shared tensors (which the safetensors format currently doesn't support), so they are omitted and end up as meta tensors, which leads to the error when you try to load the model.
I'll include a fix for this in #7496.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hello, I used EveryDream2 to train a model, but I can't load the trained model.
Before converting:
Please load the component before passing it in as an argument to from_single_file.
text_encoder = CLIPTextModel.from_pretrained('...')
pipe = StableDiffusionControlNetPipeline.from_single_file(
After converting:
Converting doesn't work either.
NotImplementedError: Cannot copy out of meta tensor; no data!
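For anyone unfamiliar with this error, it can be reproduced independently of diffusers: a module whose parameters live on the meta device has shapes and dtypes but no backing data, so moving it to a real device fails with exactly this exception.

```python
import torch

lin = torch.nn.Linear(2, 2, device="meta")  # parameters have no backing data
try:
    lin.to("cpu")  # copying out of a meta tensor is impossible
except NotImplementedError as e:
    print(type(e).__name__, "->", e)
```

Seeing this when loading a pipeline means some saved weights were meta tensors, i.e. placeholders without data.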
@crapthings could you create a separate issue please with a reproducible code example (no screenshots)? Not sure if your problem is related.
Reverting to 0.27.2 works.
Describe the bug
I tried to load a .safetensors file and save it as a diffusers-format model, and got this warning: Some weights of the model checkpoint were not used when initializing CLIPTextModelWithProjection: ['text_model.embeddings.position_ids']
When I then try to load the saved model, I get a NotImplementedError: Cannot copy out of meta tensor; no data! error.
Reproduction
Logs
System Info
diffusers version: 0.27.0
Who can help?
@yiyixuxu @sayakpaul @DN6