IrisRainbowNeko / HCP-Diffusion

A universal Stable-Diffusion toolbox
Apache License 2.0
893 stars 75 forks

text2img_sdxl.yaml inference error #64

Closed 39MOMO39 closed 6 months ago

39MOMO39 commented 7 months ago

I'm not an expert, and this error has me confused. I'd like to know how to fix it.

```
(HCP) C:\webui_git\HCP-Diffusion>python -m hcpdiff.visualizer --cfg cfgs/infer/text2img.yaml pretrained_model=stabilityai/stable-diffusion-xl-base-1.0 prompt=1girl neg_prompt=bad seed=42
2023-12-26 14:43:07.146482: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
WARNING:tensorflow:From C:\Users\momo\AppData\Local\anaconda3\envs\HCP\lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.
```

```
C:\Users\momo\AppData\Local\anaconda3\envs\HCP\lib\runpy.py:126: RuntimeWarning: 'hcpdiff.visualizer' found in sys.modules after import of package 'hcpdiff', but prior to execution of 'hcpdiff.visualizer'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:01<00:00, 2.98it/s]
C:\Users\momo\AppData\Local\anaconda3\envs\HCP\lib\site-packages\diffusers\pipelines\pipeline_utils.py:761: FutureWarning: torch_dtype is deprecated and will be removed in version 0.25.0.
  deprecate("torch_dtype", "0.25.0", "")
2023-12-26 14:43:18.309 | INFO | hcpdiff.models.compose.compose_hook:hook:49 - compose hook: clip_B
2023-12-26 14:43:18.312 | INFO | hcpdiff.models.text_emb_ex:hook:86 - hook: hatsune_miku_bluearchive, len: 4, id: 49408
2023-12-26 14:43:18.312 | INFO | hcpdiff.models.compose.compose_hook:hook:49 - compose hook: clip_bigG
2023-12-26 14:43:18.314 | INFO | hcpdiff.models.text_emb_ex:hook:86 - hook: hatsune_miku_bluearchive, len: 4, id: 49408
Traceback (most recent call last):
  File "C:\Users\momo\AppData\Local\anaconda3\envs\HCP\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\momo\AppData\Local\anaconda3\envs\HCP\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\webui_git\HCP-Diffusion\hcpdiff\visualizer.py", line 257, in <module>
    viser.vis_to_dir(prompt=prompt, negative_prompt=negative_prompt,
  File "C:\webui_git\HCP-Diffusion\hcpdiff\visualizer.py", line 235, in vis_to_dir
    images = self.vis_images(prompt, negative_prompt, seeds=seeds, **kwargs)
  File "C:\Users\momo\AppData\Local\anaconda3\envs\HCP\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\webui_git\HCP-Diffusion\hcpdiff\visualizer.py", line 196, in vis_images
    emb, pooled_output, attention_mask = self.te_hook.encode_prompt_to_emb(clean_text_n+clean_text_p)
  File "C:\webui_git\HCP-Diffusion\hcpdiff\models\compose\compose_hook.py", line 104, in encode_prompt_to_emb
    encoder_hidden_states, pooled_output = list(zip(*emb_list))
ValueError: too many values to unpack (expected 2)
```

(Some `*` characters and the `<module>` frame name in the traceback were eaten by the site's markdown rendering and have been restored above.)