Open oftenliu opened 8 months ago
Hi:
I think you are trying to load an SDXL checkpoint with StableDiffusionControlNetPipeline.
Can you try using StableDiffusionXLControlNetPipeline instead?
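For reference, a minimal sketch of what that swap could look like, using placeholder model ids (stabilityai/stable-diffusion-xl-base-1.0 and diffusers/controlnet-depth-sdxl-1.0 are assumptions here, not the checkpoints from this issue):

import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Both the base checkpoint and the ControlNet must be SDXL models.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")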
Oh, thanks, I tried it, and this error happens: TypeError: StableDiffusionXLControlNetPipeline.__init__() missing 2 required positional arguments: 'text_encoder_2' and 'tokenizer_2'.
My base model is majicmix-realistic, downloaded from https://civitai.com/models/43331/majicmix-realistic. When I replace the base model with runwayml/stable-diffusion-v1-5 from Hugging Face, the same error happens. The log looks like this:
image = pipe(
File "/root/miniconda3/envs/compy/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/root/miniconda3/envs/compy/lib/python3.10/site-packages/diffusers/pipelines/controlnet/pipeline_controlnet.py", line 1234, in __call__
down_block_res_samples, mid_block_res_sample = self.controlnet(
File "/root/miniconda3/envs/compy/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/envs/compy/lib/python3.10/site-packages/diffusers/models/controlnet.py", line 775, in forward
if "text_embeds" not in added_cond_kwargs:
TypeError: argument of type 'NoneType' is not iterable
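Both majicmix-realistic and runwayml/stable-diffusion-v1-5 are SD 1.5 checkpoints, so they pair with StableDiffusionControlNetPipeline and an SD 1.5 ControlNet rather than an SDXL one; the error above is typically what an SDXL ControlNet raises when the SD 1.5 pipeline passes it no added_cond_kwargs. A minimal sketch of a matched SD 1.5 setup, assuming the depth ControlNet lllyasviel/control_v11f1p_sd15_depth (not the checkpoint used in this issue):

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Assumption: an SD 1.5 depth ControlNet; the base model and the ControlNet
# must come from the same family (both SD 1.5 or both SDXL).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")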
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hello, how did you solve this problem?
This is an old issue and the OP had a number of problems with the code. Can you post the code you're using to load the model?
scheduler = EulerDiscreteScheduler.from_pretrained("./sdxl-turbo", subfolder="scheduler")
pipe = AutoPipelineForText2Image.from_pretrained("./sdxl-turbo", scheduler=scheduler, torch_dtype=torch.float16, variant="fp16")
noise_pred = unet(noised_latent, t_input, encoder_hidden_states=text_input,).sample

noise_pred = unet(noised_latent, t_input, encoder_hidden_states=text_input,).sample
File "/root/anaconda3/envs/diffusion-classifier/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/root/anaconda3/envs/diffusion-classifier/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/root/anaconda3/envs/diffusion-classifier/lib/python3.9/site-packages/diffusers/models/unets/unet_2d_condition.py", line 1156, in forward
aug_emb = self.get_aug_embed(
File "/root/anaconda3/envs/diffusion-classifier/lib/python3.9/site-packages/diffusers/models/unets/unet_2d_condition.py", line 977, in get_aug_embed
if "text_embeds" not in added_cond_kwargs:
TypeError: argument of type 'NoneType' is not iterable
That's not reproducible code. I need a fully reproducible example, or I won't know where the problem is.
For example:
import torch
from diffusers import AutoPipelineForText2Image
pipe = AutoPipelineForText2Image.from_pretrained(
"stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
image = pipe(
prompt="a car",
negative_prompt="",
guidance_scale=1.0,
num_inference_steps=1,
).images[0]
Works without a problem.
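Note that the pipeline builds the SDXL conditioning internally. In the earlier snippet the UNet is called directly, and an SDXL/SDXL-Turbo UNet then needs added_cond_kwargs with the pooled text embeddings and micro-conditioning time ids; omitting it leads straight to the "argument of type 'NoneType' is not iterable" error in get_aug_embed. A rough sketch of such a direct call, where the variable names and the 512x512 time ids are illustrative assumptions:

# Sketch only: noised_latent, t_input, text_input and pooled_prompt_embeds are
# assumed to be prepared elsewhere (pooled_prompt_embeds comes from the second
# SDXL text encoder, shape (batch, 1280)).
added_cond_kwargs = {
    "text_embeds": pooled_prompt_embeds,
    # original_size + crops_coords_top_left + target_size
    "time_ids": torch.tensor([[512, 512, 0, 0, 512, 512]], dtype=torch.float16, device="cuda"),
}
noise_pred = unet(
    noised_latent,
    t_input,
    encoder_hidden_states=text_input,
    added_cond_kwargs=added_cond_kwargs,
).sample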
Describe the bug
When I use StableDiffusionControlNetPipeline with a locally loaded model to generate an image, an error occurs in controlnet.py line 775: if "text_embeds" not in added_cond_kwargs:
Reproduction
import torch
import numpy as np
from controlnet_aux import MidasDetector
from diffusers.utils import load_image
from diffusers import ControlNetModel
from diffusers import StableDiffusionControlNetPipeline
from PIL import Image

def resize_img(input_image, max_side=1280, min_side=1024, size=None, pad_to_max_side=False, mode=Image.BILINEAR, base_pixel_number=64):
    ...  # function body omitted in the original report

device = "cuda" if torch.cuda.is_available() else "cpu"

midas = MidasDetector.from_pretrained("model path")

controlnet_depth_path = f''
controlnet = ControlNetModel.from_pretrained(controlnet_depth_path, torch_dtype=torch.float16)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "model path/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
'''
pipe = StableDiffusionControlNetPipeline.from_single_file(
    base_model_path, original_config_file='v1-inference.yaml', local_files_only=True,
    controlnet=controlnet, torch_dtype=torch.float16,
    safety_checker=None, requires_safety_checker=False,
).to("cuda")
'''

prompt = '(Masterpiece,best quality:1.4),movie lighting,color,high contrast,girls'
n_prompt = 'ext,logo,badhandv4,EasyNegative,ng_deepnegative_v1_75t,rev2-badprompt,verybadimagenegative_v1.3,logo,text'

image = load_image("image.png")
image = resize_img(image)

processed_image_midas = midas(image)
processed_image_midas = processed_image_midas.resize(image.size)
print(processed_image_midas.size)

generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    negative_prompt=n_prompt,
    num_inference_steps=20,
    generator=generator,
    image=processed_image_midas,
).images[0]

image.save('result.jpg')
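As a follow-up to the suggestion above that this is an SDXL checkpoint, one way to confirm is to inspect the loaded ControlNet's config, since the branch at controlnet.py line 775 only runs for SDXL-style ControlNets. A small sketch, reusing the controlnet_depth_path from the reproduction; the exact config values are assumptions about how SD 1.5 vs. SDXL ControlNets are usually configured:

from diffusers import ControlNetModel

controlnet = ControlNetModel.from_pretrained(controlnet_depth_path, torch_dtype=torch.float16)
# SDXL ControlNets use the "text_time" additional embedding and therefore expect
# added_cond_kwargs (text_embeds / time_ids); SD 1.5 ControlNets leave this as None.
print(controlnet.config.addition_embed_type)
print(controlnet.config.cross_attention_dim)  # typically 768 for SD 1.5, 2048 for SDXL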
Logs
System Info
Ubuntu 18.04.6
Diffusers 0.26.2
Who can help?
No response