smthemex / ComfyUI_StoryDiffusion

Use StoryDiffusion in ComfyUI.
Apache License 2.0

about “Loaded EVA02-CLIP-L-14-336 model config.” #89

Open tiandaoyuxi opened 1 month ago

tiandaoyuxi commented 1 month ago

```
Loading AE
Loaded EVA02-CLIP-L-14-336 model config.
Shape of rope freq: torch.Size([576, 64])
Loading pretrained EVA02-CLIP-L-14-336 weights (D:\Comfy_UI\ComfyUI\models\clip_vision\EVA02_CLIP_L_336_psz14_s6B.pt).
incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin',
'visual.blocks.0.attn.rope.freqs_cos', 'visual.blocks.0.attn.rope.freqs_sin',
(the same freqs_cos/freqs_sin pair for visual.blocks.1 through visual.blocks.23)]
!!! Exception during processing !!!
Traceback (most recent call last):
  File "D:\Comfy_UI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1790, in story_model_loader
    pipe = FluxGenerator(flux_pulid_name, ckpt_path, "cuda", offload=offload,
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\PuLID\app_flux.py", line 46, in __init__
    self.pulid_model = PuLIDPipeline(self.model, device, self.clip_vision_path, weight_dtype=torch.bfloat16)
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\PuLID\pulid\pipeline_flux.py", line 161, in __init__
    self.app = FaceAnalysis(
  File "D:\Comfy_UI\python_embeded\Lib\site-packages\insightface\app\face_analysis.py", line 43, in __init__
    assert 'detection' in self.models
AssertionError
```
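The final `AssertionError` comes from insightface's `FaceAnalysis`: if it finds no detection model (the `scrfd_*.onnx` file) in the directory it scans, `'detection'` never lands in `self.models`. A minimal sketch for checking the file layout before launching ComfyUI, assuming the antelopev2 file names from the dir listing later in this thread (`check_antelopev2` is a hypothetical helper, not part of the node):

```python
import os

# Expected antelopev2 model files (assumption, taken from the dir listing below).
EXPECTED_ONNX = [
    "1k3d68.onnx", "2d106det.onnx", "genderage.onnx",
    "glintr100.onnx", "scrfd_10g_bnkps.onnx",
]

def check_antelopev2(root: str) -> list:
    """Return the expected onnx files that are NOT present under root."""
    return [f for f in EXPECTED_ONNX
            if not os.path.isfile(os.path.join(root, f))]

# insightface resolves models as <root>/models/<name>, so the files must sit in
# e.g. ComfyUI/models/insightface/models/antelopev2/, not one level up:
# check_antelopev2(r"D:\Comfy_UI\ComfyUI\models\insightface\models\antelopev2")
```

If the returned list is non-empty, the assert in `face_analysis.py` will fire.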

Minamiyama commented 1 month ago

The model path is probably wrong: https://github.com/smthemex/ComfyUI_StoryDiffusion/issues/77#issuecomment-2376678923

tiandaoyuxi commented 1 month ago

(screenshot)

tiandaoyuxi commented 1 month ago

```
got prompt
clip missing: ['text_projection.weight']
run in id number : 1
Loading checkpoint shards: 100% 2/2 [00:12<00:00, 6.36s/it]
Loading pipeline components...: 100% 6/6 [00:14<00:00, 2.39s/it]
!!! Exception during processing !!! Pipeline <class 'ComfyUI_StoryDiffusion.utils.pipeline_v2.PhotoMakerStableDiffusionXLPipeline'> expected {'tokenizer_2', 'tokenizer', 'scheduler', 'feature_extractor', 'text_encoder', 'text_encoder_2', 'image_encoder', 'vae', 'unet'}, but only {'tokenizer_2', 'tokenizer', 'scheduler', 'text_encoder', 'text_encoder_2', 'vae'} were passed.
Traceback (most recent call last):
  File "D:\Comfy_UI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 2065, in story_model_loader
    pipe = load_models(repo_id, model_type=model_type, single_files=False, use_safetensors=True,
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\utils\load_models_utils.py", line 115, in load_models
    pipe = PhotoMakerStableDiffusionXLPipelineV2.from_pretrained(
  File "D:\Comfy_UI\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\Comfy_UI\python_embeded\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 961, in from_pretrained
    raise ValueError(
ValueError: Pipeline <class 'ComfyUI_StoryDiffusion.utils.pipeline_v2.PhotoMakerStableDiffusionXLPipeline'> expected {'tokenizer_2', 'tokenizer', 'scheduler', 'feature_extractor', 'text_encoder', 'text_encoder_2', 'image_encoder', 'vae', 'unet'}, but only {'tokenizer_2', 'tokenizer', 'scheduler', 'text_encoder', 'text_encoder_2', 'vae'} were passed.
```
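For context, diffusers' `from_pretrained` raises this `ValueError` when the local model folder lacks subfolders for components the pipeline class declares; here `unet`, `image_encoder`, and `feature_extractor` were not found, which usually means an incomplete download or a wrong path. A rough diagnostic sketch, assuming the standard diffusers repo layout where each component lives in its own subfolder (`missing_components` is a hypothetical helper):

```python
from pathlib import Path

# Components the PhotoMaker SDXL pipeline expects, copied from the error message.
EXPECTED = {
    "tokenizer", "tokenizer_2", "scheduler", "feature_extractor",
    "text_encoder", "text_encoder_2", "image_encoder", "vae", "unet",
}

def missing_components(repo_dir: str) -> set:
    """Return expected component subfolders absent from a local diffusers repo."""
    present = {p.name for p in Path(repo_dir).iterdir() if p.is_dir()}
    return EXPECTED - present
```

An empty result suggests the folder is complete; otherwise re-download the missing components or point `repo_id` at the right directory.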

tiandaoyuxi commented 1 month ago

```
D:\Comfy_UI\ComfyUI\models\antelopev2>dir
 Volume in drive D is SSD
 Volume Serial Number is 18C2-EDDA

 Directory of D:\Comfy_UI\ComfyUI\models\antelopev2

2024/08/28  15:31    <DIR>          .
2024/09/26  13:51    <DIR>          ..
2024/08/28  15:27       143,607,619 1k3d68.onnx
2024/08/28  15:26         5,030,888 2d106det.onnx
2024/08/28  15:30       360,662,982 antelopev2.zip
2024/08/28  15:26         1,322,532 genderage.onnx
2024/08/28  15:28       260,665,334 glintr100.onnx
2024/08/28  15:26        16,923,827 scrfd_10g_bnkps.onnx
               6 File(s)    788,213,182 bytes
               2 Dir(s)  35,988,631,552 bytes free
```

Minamiyama commented 1 month ago

Create another antelopev2 directory inside the antelopev2 directory, then move or copy those onnx files from outside into it: D:\Comfy_UI\ComfyUI\models\antelopev2\antelopev2
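The suggestion above can be scripted; a minimal sketch, assuming the paths from this thread (`nest_antelopev2` is a hypothetical helper, not part of the node):

```python
import shutil
from pathlib import Path

def nest_antelopev2(model_root: str) -> Path:
    """Copy the *.onnx files into a nested antelopev2/ folder, as insightface expects."""
    src = Path(model_root)
    dst = src / "antelopev2"          # the nested directory insightface scans
    dst.mkdir(exist_ok=True)
    for onnx in src.glob("*.onnx"):   # copy the detection/recognition models down
        shutil.copy2(onnx, dst / onnx.name)
    return dst

# e.g. nest_antelopev2(r"D:\Comfy_UI\ComfyUI\models\antelopev2")
```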

smthemex commented 1 month ago

Set the model paths as described in https://github.com/smthemex/ComfyUI_StoryDiffusion/issues/89#issuecomment-2381256122; for the onnx local base, see https://github.com/smthemex/ComfyUI_StoryDiffusion/issues/89#issuecomment-2381256122

Minamiyama commented 1 month ago

(screenshot)

tiandaoyuxi commented 1 month ago

(screenshot) I still have this problem, thanks!

smthemex commented 1 month ago

I looked at the code; I may have duplicated something (orz). Let me fix it first.

tiandaoyuxi commented 1 month ago

> I looked at the code; I may have duplicated something (orz). Let me fix it first.

Thanks a lot! I've been trying to reproduce this and never got it working.

cubiq / PuLID_ComfyUI said a Flux version for ComfyUI was coming, but it never arrived. It's all on you now!

smthemex commented 1 month ago

I checked the download path. Try putting those models under comfyUI\models\insightface\models\antelopev2 and run it again, and help me test it.

tiandaoyuxi commented 1 month ago

> I checked the download path. Try putting those models under comfyUI\models\insightface\models\antelopev2 and run it again, and help me test it.

(screenshot)

```
Prompt executed in 8.80 seconds
got prompt
run in id number : 1
Loading checkpoint shards: 100% 2/2 [00:07<00:00, 3.71s/it]
Loading pipeline components...: 100% 6/6 [00:08<00:00, 1.45s/it]
!!! Exception during processing !!! Pipeline <class 'ComfyUI_StoryDiffusion.utils.pipeline_v2.PhotoMakerStableDiffusionXLPipeline'> expected {'tokenizer_2', 'tokenizer', 'scheduler', 'feature_extractor', 'text_encoder', 'text_encoder_2', 'image_encoder', 'vae', 'unet'}, but only {'tokenizer_2', 'tokenizer', 'scheduler', 'text_encoder', 'text_encoder_2', 'vae'} were passed.
Traceback (most recent call last):
  File "D:\Comfy_UI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 2065, in story_model_loader
    pipe = load_models(repo_id, model_type=model_type, single_files=False, use_safetensors=True,
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\utils\load_models_utils.py", line 115, in load_models
    pipe = PhotoMakerStableDiffusionXLPipelineV2.from_pretrained(
  File "D:\Comfy_UI\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)

got prompt
100% 50/50 [00:33<00:00, 1.47it/s]
Prompt executed in 34.52 seconds

got prompt
clip missing: ['text_projection.weight']
run in id number : 1
!!! Exception during processing !!! exceptions must derive from BaseException
Traceback (most recent call last):
  File "D:\Comfy_UI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1783, in story_model_loader
    raise "need 'EVA02_CLIP_L_336_psz14_s6B.pt' in comfyUI/models/clip_vision"
TypeError: exceptions must derive from BaseException
loaded completely 0.0 159.87335777282715 True
Prompt executed in 5.22 seconds

got prompt
cannot access local variable 'translate_text_prompt' where it is not associated with a value
100% 50/50 [00:33<00:00, 1.51it/s]
Prompt executed in 35.47 seconds
```

(screenshot)

tiandaoyuxi commented 1 month ago

```
got prompt
run in id number : 1
cannot access local variable 'translate_text_prompt' where it is not associated with a value
!!! Exception during processing !!! exceptions must derive from BaseException
Traceback (most recent call last):
  File "D:\Comfy_UI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 1783, in story_model_loader
    raise "need 'EVA02_CLIP_L_336_psz14_s6B.pt' in comfyUI/models/clip_vision"
TypeError: exceptions must derive from BaseException

Prompt executed in 0.08 seconds
```
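The `TypeError: exceptions must derive from BaseException` is itself a bug in the node: `Storydiffusion_node.py` raises a plain string, which Python 3 rejects, so the real message about the missing clip_vision model gets masked. A sketch of the failing pattern and a fix (`load_clip_vision` is a hypothetical stand-in for the node's check):

```python
def load_clip_vision(path):
    # Buggy pattern from the node (commented out): raising a str is a TypeError.
    # raise "need 'EVA02_CLIP_L_336_psz14_s6B.pt' in comfyUI/models/clip_vision"

    # Fix: raise an exception *instance* that carries the message.
    if path is None:
        raise FileNotFoundError(
            "need 'EVA02_CLIP_L_336_psz14_s6B.pt' in comfyUI/models/clip_vision"
        )
    return path

try:
    raise "not an exception"  # what the node effectively does
except TypeError as err:
    print(err)                # exceptions must derive from BaseException
```

With the fix, the user sees the actionable `FileNotFoundError` message instead of the confusing `TypeError`.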

tiandaoyuxi commented 1 month ago

Do I need to git pull?

tiandaoyuxi commented 1 month ago

> Do I need to git pull?

D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion>git pull reports "Already up to date." So there is no code update, and the error above persists. If it helps, you can remote into my machine and take a look. Many thanks; I've been at this for days without success.

tiandaoyuxi commented 1 month ago

(screenshot) With repo_ID specified, the error is as follows:

```
got prompt
run in id number : 1
Loading checkpoint shards: 100% 2/2 [00:07<00:00, 3.78s/it]
Loading pipeline components...: 100% 6/6 [00:08<00:00, 1.44s/it]
!!! Exception during processing !!! Pipeline <class 'ComfyUI_StoryDiffusion.utils.pipeline_v2.PhotoMakerStableDiffusionXLPipeline'> expected {'tokenizer_2', 'tokenizer', 'scheduler', 'feature_extractor', 'text_encoder', 'text_encoder_2', 'image_encoder', 'vae', 'unet'}, but only {'tokenizer_2', 'tokenizer', 'scheduler', 'text_encoder', 'text_encoder_2', 'vae'} were passed.
Traceback (most recent call last):
  File "D:\Comfy_UI\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Comfy_UI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\Storydiffusion_node.py", line 2065, in story_model_loader
    pipe = load_models(repo_id, model_type=model_type, single_files=False, use_safetensors=True,
  File "D:\Comfy_UI\ComfyUI\custom_nodes\ComfyUI_StoryDiffusion\utils\load_models_utils.py", line 115, in load_models
    pipe = PhotoMakerStableDiffusionXLPipelineV2.from_pretrained(
  File "D:\Comfy_UI\python_embeded\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\Comfy_UI\python_embeded\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 961, in from_pretrained
    raise ValueError(
ValueError: Pipeline <class 'ComfyUI_StoryDiffusion.utils.pipeline_v2.PhotoMakerStableDiffusionXLPipeline'> expected {'tokenizer_2', 'tokenizer', 'scheduler', 'feature_extractor', 'text_encoder', 'text_encoder_2', 'image_encoder', 'vae', 'unet'}, but only {'tokenizer_2', 'tokenizer', 'scheduler', 'text_encoder', 'text_encoder_2', 'vae'} were passed.
```

Prompt executed in 8.68 seconds

smthemex commented 1 month ago

The path name is wrong; write it as D:/xxx/xxxx. Also, your earlier error was that no EVA02-CLIP-L-14-336 model was selected in clip_vision. When using PuLID, this version does not need the repo filled in.
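The separator issue can be sidestepped by converting a copied Windows path before pasting it into the node's text field; a small sketch (`normalize_win_path` is a hypothetical helper, not part of the node):

```python
from pathlib import PureWindowsPath

def normalize_win_path(p: str) -> str:
    """Rewrite a copied Windows path (backslashes) to forward slashes."""
    return PureWindowsPath(p).as_posix()

print(normalize_win_path(r"D:\Comfy_UI\ComfyUI\models"))  # D:/Comfy_UI/ComfyUI/models
```

Forward slashes avoid the backslash-escape problems that Explorer-copied paths cause in configs and string fields.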

tiandaoyuxi commented 1 month ago

> The path name is wrong; write it as D:/xxx/xxxx

Understood, it's a Windows thing: the copied path uses \.

> Your earlier error was that no EVA02-CLIP-L-14-336 model was selected in clip_vision

I'll go add it.

> When using PuLID, this version does not need the repo filled in

OK. (screenshot) The 4090 is at full load the whole time. No error this time, so I'll keep waiting.

```
got prompt
run in id number : 1
Init model in fp8
loading weight_dtype is torch.float8_e4m3fn
normal loading ...
Got 780 missing keys:
img_in.weight img_in.bias
time_in.in_layer.weight time_in.in_layer.bias time_in.out_layer.weight time_in.out_layer.bias
vector_in.in_layer.weight vector_in.in_layer.bias vector_in.out_layer.weight vector_in.out_layer.bias
guidance_in.in_layer.weight guidance_in.in_layer.bias guidance_in.out_layer.weight guidance_in.out_layer.bias
txt_in.weight txt_in.bias
double_blocks.0.img_mod.lin.weight double_blocks.0.img_mod.lin.bias
double_blocks.0.img_attn.qkv.weight double_blocks.0.img_attn.qkv.bias
double_blocks.0.img_attn.norm.query_norm.scale double_blocks.0.img_attn.norm.key_norm.scale
double_blocks.0.img_attn.proj.weight double_blocks.0.img_attn.proj.bias
double_blocks.0.img_mlp.0.weight double_blocks.0.img_mlp.0.bias
double_blocks.0.img_mlp.2.weight double_blocks.0.img_mlp.2.bias
double_blocks.0.txt_mod.lin.weight double_blocks.0.txt_mod.lin.bias
double_blocks.0.txt_attn.qkv.weight double_blocks.0.txt_attn.qkv.bias
double_blocks.0.txt_attn.norm.query_norm.scale double_blocks.0.txt_attn.norm.key_norm.scale
double_blocks.0.txt_attn.proj.weight double_blocks.0.txt_attn.proj.bias
double_blocks.0.txt_mlp.0.weight double_blocks.0.txt_mlp.0.bias
double_blocks.0.txt_mlp.2.weight double_blocks.0.txt_mlp.2.bias
(the same 24 keys repeat for double_blocks.1 through double_blocks.18)
single_blocks.0.linear1.weight single_blocks.0.linear1.bias
single_blocks.0.linear2.weight single_blocks.0.linear2.bias
single_blocks.0.norm.query_norm.scale single_blocks.0.norm.key_norm.scale
single_blocks.0.modulation.lin.weight single_blocks.0.modulation.lin.bias
(the same 8 keys repeat for single_blocks.1 and single_blocks.2; the log is cut off here)
```
single_blocks.2.modulation.lin.bias single_blocks.3.linear1.weight single_blocks.3.linear1.bias single_blocks.3.linear2.weight single_blocks.3.linear2.bias single_blocks.3.norm.query_norm.scale single_blocks.3.norm.key_norm.scale single_blocks.3.modulation.lin.weight single_blocks.3.modulation.lin.bias single_blocks.4.linear1.weight single_blocks.4.linear1.bias single_blocks.4.linear2.weight single_blocks.4.linear2.bias single_blocks.4.norm.query_norm.scale single_blocks.4.norm.key_norm.scale single_blocks.4.modulation.lin.weight single_blocks.4.modulation.lin.bias single_blocks.5.linear1.weight single_blocks.5.linear1.bias single_blocks.5.linear2.weight single_blocks.5.linear2.bias single_blocks.5.norm.query_norm.scale single_blocks.5.norm.key_norm.scale single_blocks.5.modulation.lin.weight single_blocks.5.modulation.lin.bias single_blocks.6.linear1.weight single_blocks.6.linear1.bias single_blocks.6.linear2.weight single_blocks.6.linear2.bias single_blocks.6.norm.query_norm.scale single_blocks.6.norm.key_norm.scale single_blocks.6.modulation.lin.weight single_blocks.6.modulation.lin.bias single_blocks.7.linear1.weight single_blocks.7.linear1.bias single_blocks.7.linear2.weight single_blocks.7.linear2.bias single_blocks.7.norm.query_norm.scale single_blocks.7.norm.key_norm.scale single_blocks.7.modulation.lin.weight single_blocks.7.modulation.lin.bias single_blocks.8.linear1.weight single_blocks.8.linear1.bias single_blocks.8.linear2.weight single_blocks.8.linear2.bias single_blocks.8.norm.query_norm.scale single_blocks.8.norm.key_norm.scale single_blocks.8.modulation.lin.weight single_blocks.8.modulation.lin.bias single_blocks.9.linear1.weight single_blocks.9.linear1.bias single_blocks.9.linear2.weight single_blocks.9.linear2.bias single_blocks.9.norm.query_norm.scale single_blocks.9.norm.key_norm.scale single_blocks.9.modulation.lin.weight single_blocks.9.modulation.lin.bias single_blocks.10.linear1.weight single_blocks.10.linear1.bias 
single_blocks.10.linear2.weight single_blocks.10.linear2.bias single_blocks.10.norm.query_norm.scale single_blocks.10.norm.key_norm.scale single_blocks.10.modulation.lin.weight single_blocks.10.modulation.lin.bias single_blocks.11.linear1.weight single_blocks.11.linear1.bias single_blocks.11.linear2.weight single_blocks.11.linear2.bias single_blocks.11.norm.query_norm.scale single_blocks.11.norm.key_norm.scale single_blocks.11.modulation.lin.weight single_blocks.11.modulation.lin.bias single_blocks.12.linear1.weight single_blocks.12.linear1.bias single_blocks.12.linear2.weight single_blocks.12.linear2.bias single_blocks.12.norm.query_norm.scale single_blocks.12.norm.key_norm.scale single_blocks.12.modulation.lin.weight single_blocks.12.modulation.lin.bias single_blocks.13.linear1.weight single_blocks.13.linear1.bias single_blocks.13.linear2.weight single_blocks.13.linear2.bias single_blocks.13.norm.query_norm.scale single_blocks.13.norm.key_norm.scale single_blocks.13.modulation.lin.weight single_blocks.13.modulation.lin.bias single_blocks.14.linear1.weight single_blocks.14.linear1.bias single_blocks.14.linear2.weight single_blocks.14.linear2.bias single_blocks.14.norm.query_norm.scale single_blocks.14.norm.key_norm.scale single_blocks.14.modulation.lin.weight single_blocks.14.modulation.lin.bias single_blocks.15.linear1.weight single_blocks.15.linear1.bias single_blocks.15.linear2.weight single_blocks.15.linear2.bias single_blocks.15.norm.query_norm.scale single_blocks.15.norm.key_norm.scale single_blocks.15.modulation.lin.weight single_blocks.15.modulation.lin.bias single_blocks.16.linear1.weight single_blocks.16.linear1.bias single_blocks.16.linear2.weight single_blocks.16.linear2.bias single_blocks.16.norm.query_norm.scale single_blocks.16.norm.key_norm.scale single_blocks.16.modulation.lin.weight single_blocks.16.modulation.lin.bias single_blocks.17.linear1.weight single_blocks.17.linear1.bias single_blocks.17.linear2.weight single_blocks.17.linear2.bias 
single_blocks.17.norm.query_norm.scale single_blocks.17.norm.key_norm.scale single_blocks.17.modulation.lin.weight single_blocks.17.modulation.lin.bias single_blocks.18.linear1.weight single_blocks.18.linear1.bias single_blocks.18.linear2.weight single_blocks.18.linear2.bias single_blocks.18.norm.query_norm.scale single_blocks.18.norm.key_norm.scale single_blocks.18.modulation.lin.weight single_blocks.18.modulation.lin.bias single_blocks.19.linear1.weight single_blocks.19.linear1.bias single_blocks.19.linear2.weight single_blocks.19.linear2.bias single_blocks.19.norm.query_norm.scale single_blocks.19.norm.key_norm.scale single_blocks.19.modulation.lin.weight single_blocks.19.modulation.lin.bias single_blocks.20.linear1.weight single_blocks.20.linear1.bias single_blocks.20.linear2.weight single_blocks.20.linear2.bias single_blocks.20.norm.query_norm.scale single_blocks.20.norm.key_norm.scale single_blocks.20.modulation.lin.weight single_blocks.20.modulation.lin.bias single_blocks.21.linear1.weight single_blocks.21.linear1.bias single_blocks.21.linear2.weight single_blocks.21.linear2.bias single_blocks.21.norm.query_norm.scale single_blocks.21.norm.key_norm.scale single_blocks.21.modulation.lin.weight single_blocks.21.modulation.lin.bias single_blocks.22.linear1.weight single_blocks.22.linear1.bias single_blocks.22.linear2.weight single_blocks.22.linear2.bias single_blocks.22.norm.query_norm.scale single_blocks.22.norm.key_norm.scale single_blocks.22.modulation.lin.weight single_blocks.22.modulation.lin.bias single_blocks.23.linear1.weight single_blocks.23.linear1.bias single_blocks.23.linear2.weight single_blocks.23.linear2.bias single_blocks.23.norm.query_norm.scale single_blocks.23.norm.key_norm.scale single_blocks.23.modulation.lin.weight single_blocks.23.modulation.lin.bias single_blocks.24.linear1.weight single_blocks.24.linear1.bias single_blocks.24.linear2.weight single_blocks.24.linear2.bias single_blocks.24.norm.query_norm.scale 
single_blocks.24.norm.key_norm.scale single_blocks.24.modulation.lin.weight single_blocks.24.modulation.lin.bias single_blocks.25.linear1.weight single_blocks.25.linear1.bias single_blocks.25.linear2.weight single_blocks.25.linear2.bias single_blocks.25.norm.query_norm.scale single_blocks.25.norm.key_norm.scale single_blocks.25.modulation.lin.weight single_blocks.25.modulation.lin.bias single_blocks.26.linear1.weight single_blocks.26.linear1.bias single_blocks.26.linear2.weight single_blocks.26.linear2.bias single_blocks.26.norm.query_norm.scale single_blocks.26.norm.key_norm.scale single_blocks.26.modulation.lin.weight single_blocks.26.modulation.lin.bias single_blocks.27.linear1.weight single_blocks.27.linear1.bias single_blocks.27.linear2.weight single_blocks.27.linear2.bias single_blocks.27.norm.query_norm.scale single_blocks.27.norm.key_norm.scale single_blocks.27.modulation.lin.weight single_blocks.27.modulation.lin.bias single_blocks.28.linear1.weight single_blocks.28.linear1.bias single_blocks.28.linear2.weight single_blocks.28.linear2.bias single_blocks.28.norm.query_norm.scale single_blocks.28.norm.key_norm.scale single_blocks.28.modulation.lin.weight single_blocks.28.modulation.lin.bias single_blocks.29.linear1.weight single_blocks.29.linear1.bias single_blocks.29.linear2.weight single_blocks.29.linear2.bias single_blocks.29.norm.query_norm.scale single_blocks.29.norm.key_norm.scale single_blocks.29.modulation.lin.weight single_blocks.29.modulation.lin.bias single_blocks.30.linear1.weight single_blocks.30.linear1.bias single_blocks.30.linear2.weight single_blocks.30.linear2.bias single_blocks.30.norm.query_norm.scale single_blocks.30.norm.key_norm.scale single_blocks.30.modulation.lin.weight single_blocks.30.modulation.lin.bias single_blocks.31.linear1.weight single_blocks.31.linear1.bias single_blocks.31.linear2.weight single_blocks.31.linear2.bias single_blocks.31.norm.query_norm.scale single_blocks.31.norm.key_norm.scale 
single_blocks.31.modulation.lin.weight single_blocks.31.modulation.lin.bias single_blocks.32.linear1.weight single_blocks.32.linear1.bias single_blocks.32.linear2.weight single_blocks.32.linear2.bias single_blocks.32.norm.query_norm.scale single_blocks.32.norm.key_norm.scale single_blocks.32.modulation.lin.weight single_blocks.32.modulation.lin.bias single_blocks.33.linear1.weight single_blocks.33.linear1.bias single_blocks.33.linear2.weight single_blocks.33.linear2.bias single_blocks.33.norm.query_norm.scale single_blocks.33.norm.key_norm.scale single_blocks.33.modulation.lin.weight single_blocks.33.modulation.lin.bias single_blocks.34.linear1.weight single_blocks.34.linear1.bias single_blocks.34.linear2.weight single_blocks.34.linear2.bias single_blocks.34.norm.query_norm.scale single_blocks.34.norm.key_norm.scale single_blocks.34.modulation.lin.weight single_blocks.34.modulation.lin.bias single_blocks.35.linear1.weight single_blocks.35.linear1.bias single_blocks.35.linear2.weight single_blocks.35.linear2.bias single_blocks.35.norm.query_norm.scale single_blocks.35.norm.key_norm.scale single_blocks.35.modulation.lin.weight single_blocks.35.modulation.lin.bias single_blocks.36.linear1.weight single_blocks.36.linear1.bias single_blocks.36.linear2.weight single_blocks.36.linear2.bias single_blocks.36.norm.query_norm.scale single_blocks.36.norm.key_norm.scale single_blocks.36.modulation.lin.weight single_blocks.36.modulation.lin.bias single_blocks.37.linear1.weight single_blocks.37.linear1.bias single_blocks.37.linear2.weight single_blocks.37.linear2.bias single_blocks.37.norm.query_norm.scale single_blocks.37.norm.key_norm.scale single_blocks.37.modulation.lin.weight single_blocks.37.modulation.lin.bias final_layer.linear.weight final_layer.linear.bias final_layer.adaLN_modulation.1.weight final_layer.adaLN_modulation.1.bias


Got 1442 unexpected keys: the same double_blocks parameter names as in the missing-keys list above, each prefixed with model.diffusion_model. — for double_blocks.0–7 and 10–18 (listed in lexicographic order): {img,txt}_attn.norm.{key,query}_norm.scale, {img,txt}_attn.proj.{bias,weight}, {img,txt}_attn.qkv.{bias,weight}, {img,txt}_mlp.{0,2}.{bias,weight}, {img,txt}_mod.lin.{bias,weight}. The paste is cut off at model.diffusion_model.double_blocks.7.txt_mlp.0.weight
model.diffusion_model.double_blocks.7.txt_mlp.2.bias model.diffusion_model.double_blocks.7.txt_mlp.2.weight model.diffusion_model.double_blocks.7.txt_mod.lin.bias model.diffusion_model.double_blocks.7.txt_mod.lin.weight model.diffusion_model.double_blocks.8.img_attn.norm.key_norm.scale model.diffusion_model.double_blocks.8.img_attn.norm.query_norm.scale model.diffusion_model.double_blocks.8.img_attn.proj.bias model.diffusion_model.double_blocks.8.img_attn.proj.weight model.diffusion_model.double_blocks.8.img_attn.qkv.bias model.diffusion_model.double_blocks.8.img_attn.qkv.weight model.diffusion_model.double_blocks.8.img_mlp.0.bias model.diffusion_model.double_blocks.8.img_mlp.0.weight model.diffusion_model.double_blocks.8.img_mlp.2.bias model.diffusion_model.double_blocks.8.img_mlp.2.weight model.diffusion_model.double_blocks.8.img_mod.lin.bias model.diffusion_model.double_blocks.8.img_mod.lin.weight model.diffusion_model.double_blocks.8.txt_attn.norm.key_norm.scale model.diffusion_model.double_blocks.8.txt_attn.norm.query_norm.scale model.diffusion_model.double_blocks.8.txt_attn.proj.bias model.diffusion_model.double_blocks.8.txt_attn.proj.weight model.diffusion_model.double_blocks.8.txt_attn.qkv.bias model.diffusion_model.double_blocks.8.txt_attn.qkv.weight model.diffusion_model.double_blocks.8.txt_mlp.0.bias model.diffusion_model.double_blocks.8.txt_mlp.0.weight model.diffusion_model.double_blocks.8.txt_mlp.2.bias model.diffusion_model.double_blocks.8.txt_mlp.2.weight model.diffusion_model.double_blocks.8.txt_mod.lin.bias model.diffusion_model.double_blocks.8.txt_mod.lin.weight model.diffusion_model.double_blocks.9.img_attn.norm.key_norm.scale model.diffusion_model.double_blocks.9.img_attn.norm.query_norm.scale model.diffusion_model.double_blocks.9.img_attn.proj.bias model.diffusion_model.double_blocks.9.img_attn.proj.weight model.diffusion_model.double_blocks.9.img_attn.qkv.bias model.diffusion_model.double_blocks.9.img_attn.qkv.weight 
model.diffusion_model.double_blocks.9.img_mlp.0.bias model.diffusion_model.double_blocks.9.img_mlp.0.weight model.diffusion_model.double_blocks.9.img_mlp.2.bias model.diffusion_model.double_blocks.9.img_mlp.2.weight model.diffusion_model.double_blocks.9.img_mod.lin.bias model.diffusion_model.double_blocks.9.img_mod.lin.weight model.diffusion_model.double_blocks.9.txt_attn.norm.key_norm.scale model.diffusion_model.double_blocks.9.txt_attn.norm.query_norm.scale model.diffusion_model.double_blocks.9.txt_attn.proj.bias model.diffusion_model.double_blocks.9.txt_attn.proj.weight model.diffusion_model.double_blocks.9.txt_attn.qkv.bias model.diffusion_model.double_blocks.9.txt_attn.qkv.weight model.diffusion_model.double_blocks.9.txt_mlp.0.bias model.diffusion_model.double_blocks.9.txt_mlp.0.weight model.diffusion_model.double_blocks.9.txt_mlp.2.bias model.diffusion_model.double_blocks.9.txt_mlp.2.weight model.diffusion_model.double_blocks.9.txt_mod.lin.bias model.diffusion_model.double_blocks.9.txt_mod.lin.weight model.diffusion_model.final_layer.adaLN_modulation.1.bias model.diffusion_model.final_layer.adaLN_modulation.1.weight model.diffusion_model.final_layer.linear.bias model.diffusion_model.final_layer.linear.weight model.diffusion_model.guidance_in.in_layer.bias model.diffusion_model.guidance_in.in_layer.weight model.diffusion_model.guidance_in.out_layer.bias model.diffusion_model.guidance_in.out_layer.weight model.diffusion_model.img_in.bias model.diffusion_model.img_in.weight model.diffusion_model.single_blocks.0.linear1.bias model.diffusion_model.single_blocks.0.linear1.weight model.diffusion_model.single_blocks.0.linear2.bias model.diffusion_model.single_blocks.0.linear2.weight model.diffusion_model.single_blocks.0.modulation.lin.bias model.diffusion_model.single_blocks.0.modulation.lin.weight model.diffusion_model.single_blocks.0.norm.key_norm.scale model.diffusion_model.single_blocks.0.norm.query_norm.scale model.diffusion_model.single_blocks.1.linear1.bias 
model.diffusion_model.single_blocks.1.linear1.weight model.diffusion_model.single_blocks.1.linear2.bias model.diffusion_model.single_blocks.1.linear2.weight model.diffusion_model.single_blocks.1.modulation.lin.bias model.diffusion_model.single_blocks.1.modulation.lin.weight model.diffusion_model.single_blocks.1.norm.key_norm.scale model.diffusion_model.single_blocks.1.norm.query_norm.scale model.diffusion_model.single_blocks.10.linear1.bias model.diffusion_model.single_blocks.10.linear1.weight model.diffusion_model.single_blocks.10.linear2.bias model.diffusion_model.single_blocks.10.linear2.weight model.diffusion_model.single_blocks.10.modulation.lin.bias model.diffusion_model.single_blocks.10.modulation.lin.weight model.diffusion_model.single_blocks.10.norm.key_norm.scale model.diffusion_model.single_blocks.10.norm.query_norm.scale model.diffusion_model.single_blocks.11.linear1.bias model.diffusion_model.single_blocks.11.linear1.weight model.diffusion_model.single_blocks.11.linear2.bias model.diffusion_model.single_blocks.11.linear2.weight model.diffusion_model.single_blocks.11.modulation.lin.bias model.diffusion_model.single_blocks.11.modulation.lin.weight model.diffusion_model.single_blocks.11.norm.key_norm.scale model.diffusion_model.single_blocks.11.norm.query_norm.scale model.diffusion_model.single_blocks.12.linear1.bias model.diffusion_model.single_blocks.12.linear1.weight model.diffusion_model.single_blocks.12.linear2.bias model.diffusion_model.single_blocks.12.linear2.weight model.diffusion_model.single_blocks.12.modulation.lin.bias model.diffusion_model.single_blocks.12.modulation.lin.weight model.diffusion_model.single_blocks.12.norm.key_norm.scale model.diffusion_model.single_blocks.12.norm.query_norm.scale model.diffusion_model.single_blocks.13.linear1.bias model.diffusion_model.single_blocks.13.linear1.weight model.diffusion_model.single_blocks.13.linear2.bias model.diffusion_model.single_blocks.13.linear2.weight 
model.diffusion_model.single_blocks.13.modulation.lin.bias model.diffusion_model.single_blocks.13.modulation.lin.weight model.diffusion_model.single_blocks.13.norm.key_norm.scale model.diffusion_model.single_blocks.13.norm.query_norm.scale model.diffusion_model.single_blocks.14.linear1.bias model.diffusion_model.single_blocks.14.linear1.weight model.diffusion_model.single_blocks.14.linear2.bias model.diffusion_model.single_blocks.14.linear2.weight model.diffusion_model.single_blocks.14.modulation.lin.bias model.diffusion_model.single_blocks.14.modulation.lin.weight model.diffusion_model.single_blocks.14.norm.key_norm.scale model.diffusion_model.single_blocks.14.norm.query_norm.scale model.diffusion_model.single_blocks.15.linear1.bias model.diffusion_model.single_blocks.15.linear1.weight model.diffusion_model.single_blocks.15.linear2.bias model.diffusion_model.single_blocks.15.linear2.weight model.diffusion_model.single_blocks.15.modulation.lin.bias model.diffusion_model.single_blocks.15.modulation.lin.weight model.diffusion_model.single_blocks.15.norm.key_norm.scale model.diffusion_model.single_blocks.15.norm.query_norm.scale model.diffusion_model.single_blocks.16.linear1.bias model.diffusion_model.single_blocks.16.linear1.weight model.diffusion_model.single_blocks.16.linear2.bias model.diffusion_model.single_blocks.16.linear2.weight model.diffusion_model.single_blocks.16.modulation.lin.bias model.diffusion_model.single_blocks.16.modulation.lin.weight model.diffusion_model.single_blocks.16.norm.key_norm.scale model.diffusion_model.single_blocks.16.norm.query_norm.scale model.diffusion_model.single_blocks.17.linear1.bias model.diffusion_model.single_blocks.17.linear1.weight model.diffusion_model.single_blocks.17.linear2.bias model.diffusion_model.single_blocks.17.linear2.weight model.diffusion_model.single_blocks.17.modulation.lin.bias model.diffusion_model.single_blocks.17.modulation.lin.weight model.diffusion_model.single_blocks.17.norm.key_norm.scale 
model.diffusion_model.single_blocks.17.norm.query_norm.scale model.diffusion_model.single_blocks.18.linear1.bias model.diffusion_model.single_blocks.18.linear1.weight model.diffusion_model.single_blocks.18.linear2.bias model.diffusion_model.single_blocks.18.linear2.weight model.diffusion_model.single_blocks.18.modulation.lin.bias model.diffusion_model.single_blocks.18.modulation.lin.weight model.diffusion_model.single_blocks.18.norm.key_norm.scale model.diffusion_model.single_blocks.18.norm.query_norm.scale model.diffusion_model.single_blocks.19.linear1.bias model.diffusion_model.single_blocks.19.linear1.weight model.diffusion_model.single_blocks.19.linear2.bias model.diffusion_model.single_blocks.19.linear2.weight model.diffusion_model.single_blocks.19.modulation.lin.bias model.diffusion_model.single_blocks.19.modulation.lin.weight model.diffusion_model.single_blocks.19.norm.key_norm.scale model.diffusion_model.single_blocks.19.norm.query_norm.scale model.diffusion_model.single_blocks.2.linear1.bias model.diffusion_model.single_blocks.2.linear1.weight model.diffusion_model.single_blocks.2.linear2.bias model.diffusion_model.single_blocks.2.linear2.weight model.diffusion_model.single_blocks.2.modulation.lin.bias model.diffusion_model.single_blocks.2.modulation.lin.weight model.diffusion_model.single_blocks.2.norm.key_norm.scale model.diffusion_model.single_blocks.2.norm.query_norm.scale model.diffusion_model.single_blocks.20.linear1.bias model.diffusion_model.single_blocks.20.linear1.weight model.diffusion_model.single_blocks.20.linear2.bias model.diffusion_model.single_blocks.20.linear2.weight model.diffusion_model.single_blocks.20.modulation.lin.bias model.diffusion_model.single_blocks.20.modulation.lin.weight model.diffusion_model.single_blocks.20.norm.key_norm.scale model.diffusion_model.single_blocks.20.norm.query_norm.scale model.diffusion_model.single_blocks.21.linear1.bias model.diffusion_model.single_blocks.21.linear1.weight 
model.diffusion_model.single_blocks.21.linear2.bias model.diffusion_model.single_blocks.21.linear2.weight model.diffusion_model.single_blocks.21.modulation.lin.bias model.diffusion_model.single_blocks.21.modulation.lin.weight model.diffusion_model.single_blocks.21.norm.key_norm.scale model.diffusion_model.single_blocks.21.norm.query_norm.scale model.diffusion_model.single_blocks.22.linear1.bias model.diffusion_model.single_blocks.22.linear1.weight model.diffusion_model.single_blocks.22.linear2.bias model.diffusion_model.single_blocks.22.linear2.weight model.diffusion_model.single_blocks.22.modulation.lin.bias model.diffusion_model.single_blocks.22.modulation.lin.weight model.diffusion_model.single_blocks.22.norm.key_norm.scale model.diffusion_model.single_blocks.22.norm.query_norm.scale model.diffusion_model.single_blocks.23.linear1.bias model.diffusion_model.single_blocks.23.linear1.weight model.diffusion_model.single_blocks.23.linear2.bias model.diffusion_model.single_blocks.23.linear2.weight model.diffusion_model.single_blocks.23.modulation.lin.bias model.diffusion_model.single_blocks.23.modulation.lin.weight model.diffusion_model.single_blocks.23.norm.key_norm.scale model.diffusion_model.single_blocks.23.norm.query_norm.scale model.diffusion_model.single_blocks.24.linear1.bias model.diffusion_model.single_blocks.24.linear1.weight model.diffusion_model.single_blocks.24.linear2.bias model.diffusion_model.single_blocks.24.linear2.weight model.diffusion_model.single_blocks.24.modulation.lin.bias model.diffusion_model.single_blocks.24.modulation.lin.weight model.diffusion_model.single_blocks.24.norm.key_norm.scale model.diffusion_model.single_blocks.24.norm.query_norm.scale model.diffusion_model.single_blocks.25.linear1.bias model.diffusion_model.single_blocks.25.linear1.weight model.diffusion_model.single_blocks.25.linear2.bias model.diffusion_model.single_blocks.25.linear2.weight model.diffusion_model.single_blocks.25.modulation.lin.bias 
model.diffusion_model.single_blocks.25.modulation.lin.weight model.diffusion_model.single_blocks.25.norm.key_norm.scale model.diffusion_model.single_blocks.25.norm.query_norm.scale model.diffusion_model.single_blocks.26.linear1.bias model.diffusion_model.single_blocks.26.linear1.weight model.diffusion_model.single_blocks.26.linear2.bias model.diffusion_model.single_blocks.26.linear2.weight model.diffusion_model.single_blocks.26.modulation.lin.bias model.diffusion_model.single_blocks.26.modulation.lin.weight model.diffusion_model.single_blocks.26.norm.key_norm.scale model.diffusion_model.single_blocks.26.norm.query_norm.scale model.diffusion_model.single_blocks.27.linear1.bias model.diffusion_model.single_blocks.27.linear1.weight model.diffusion_model.single_blocks.27.linear2.bias model.diffusion_model.single_blocks.27.linear2.weight model.diffusion_model.single_blocks.27.modulation.lin.bias model.diffusion_model.single_blocks.27.modulation.lin.weight model.diffusion_model.single_blocks.27.norm.key_norm.scale model.diffusion_model.single_blocks.27.norm.query_norm.scale model.diffusion_model.single_blocks.28.linear1.bias model.diffusion_model.single_blocks.28.linear1.weight model.diffusion_model.single_blocks.28.linear2.bias model.diffusion_model.single_blocks.28.linear2.weight model.diffusion_model.single_blocks.28.modulation.lin.bias model.diffusion_model.single_blocks.28.modulation.lin.weight model.diffusion_model.single_blocks.28.norm.key_norm.scale model.diffusion_model.single_blocks.28.norm.query_norm.scale model.diffusion_model.single_blocks.29.linear1.bias model.diffusion_model.single_blocks.29.linear1.weight model.diffusion_model.single_blocks.29.linear2.bias model.diffusion_model.single_blocks.29.linear2.weight model.diffusion_model.single_blocks.29.modulation.lin.bias model.diffusion_model.single_blocks.29.modulation.lin.weight model.diffusion_model.single_blocks.29.norm.key_norm.scale model.diffusion_model.single_blocks.29.norm.query_norm.scale 
model.diffusion_model.single_blocks.3.linear1.bias model.diffusion_model.single_blocks.3.linear1.weight model.diffusion_model.single_blocks.3.linear2.bias model.diffusion_model.single_blocks.3.linear2.weight model.diffusion_model.single_blocks.3.modulation.lin.bias model.diffusion_model.single_blocks.3.modulation.lin.weight model.diffusion_model.single_blocks.3.norm.key_norm.scale model.diffusion_model.single_blocks.3.norm.query_norm.scale model.diffusion_model.single_blocks.30.linear1.bias model.diffusion_model.single_blocks.30.linear1.weight model.diffusion_model.single_blocks.30.linear2.bias model.diffusion_model.single_blocks.30.linear2.weight model.diffusion_model.single_blocks.30.modulation.lin.bias model.diffusion_model.single_blocks.30.modulation.lin.weight model.diffusion_model.single_blocks.30.norm.key_norm.scale model.diffusion_model.single_blocks.30.norm.query_norm.scale model.diffusion_model.single_blocks.31.linear1.bias model.diffusion_model.single_blocks.31.linear1.weight model.diffusion_model.single_blocks.31.linear2.bias model.diffusion_model.single_blocks.31.linear2.weight model.diffusion_model.single_blocks.31.modulation.lin.bias model.diffusion_model.single_blocks.31.modulation.lin.weight model.diffusion_model.single_blocks.31.norm.key_norm.scale model.diffusion_model.single_blocks.31.norm.query_norm.scale model.diffusion_model.single_blocks.32.linear1.bias model.diffusion_model.single_blocks.32.linear1.weight model.diffusion_model.single_blocks.32.linear2.bias model.diffusion_model.single_blocks.32.linear2.weight model.diffusion_model.single_blocks.32.modulation.lin.bias model.diffusion_model.single_blocks.32.modulation.lin.weight model.diffusion_model.single_blocks.32.norm.key_norm.scale model.diffusion_model.single_blocks.32.norm.query_norm.scale model.diffusion_model.single_blocks.33.linear1.bias model.diffusion_model.single_blocks.33.linear1.weight model.diffusion_model.single_blocks.33.linear2.bias 
model.diffusion_model.single_blocks.33.linear2.weight model.diffusion_model.single_blocks.33.modulation.lin.bias model.diffusion_model.single_blocks.33.modulation.lin.weight model.diffusion_model.single_blocks.33.norm.key_norm.scale model.diffusion_model.single_blocks.33.norm.query_norm.scale model.diffusion_model.single_blocks.34.linear1.bias model.diffusion_model.single_blocks.34.linear1.weight model.diffusion_model.single_blocks.34.linear2.bias model.diffusion_model.single_blocks.34.linear2.weight model.diffusion_model.single_blocks.34.modulation.lin.bias model.diffusion_model.single_blocks.34.modulation.lin.weight model.diffusion_model.single_blocks.34.norm.key_norm.scale model.diffusion_model.single_blocks.34.norm.query_norm.scale model.diffusion_model.single_blocks.35.linear1.bias model.diffusion_model.single_blocks.35.linear1.weight model.diffusion_model.single_blocks.35.linear2.bias model.diffusion_model.single_blocks.35.linear2.weight model.diffusion_model.single_blocks.35.modulation.lin.bias model.diffusion_model.single_blocks.35.modulation.lin.weight model.diffusion_model.single_blocks.35.norm.key_norm.scale model.diffusion_model.single_blocks.35.norm.query_norm.scale model.diffusion_model.single_blocks.36.linear1.bias model.diffusion_model.single_blocks.36.linear1.weight model.diffusion_model.single_blocks.36.linear2.bias model.diffusion_model.single_blocks.36.linear2.weight model.diffusion_model.single_blocks.36.modulation.lin.bias model.diffusion_model.single_blocks.36.modulation.lin.weight model.diffusion_model.single_blocks.36.norm.key_norm.scale model.diffusion_model.single_blocks.36.norm.query_norm.scale model.diffusion_model.single_blocks.37.linear1.bias model.diffusion_model.single_blocks.37.linear1.weight model.diffusion_model.single_blocks.37.linear2.bias model.diffusion_model.single_blocks.37.linear2.weight model.diffusion_model.single_blocks.37.modulation.lin.bias model.diffusion_model.single_blocks.37.modulation.lin.weight 
model.diffusion_model.single_blocks.37.norm.key_norm.scale model.diffusion_model.single_blocks.37.norm.query_norm.scale model.diffusion_model.single_blocks.4.linear1.bias model.diffusion_model.single_blocks.4.linear1.weight model.diffusion_model.single_blocks.4.linear2.bias model.diffusion_model.single_blocks.4.linear2.weight model.diffusion_model.single_blocks.4.modulation.lin.bias model.diffusion_model.single_blocks.4.modulation.lin.weight model.diffusion_model.single_blocks.4.norm.key_norm.scale model.diffusion_model.single_blocks.4.norm.query_norm.scale model.diffusion_model.single_blocks.5.linear1.bias model.diffusion_model.single_blocks.5.linear1.weight model.diffusion_model.single_blocks.5.linear2.bias model.diffusion_model.single_blocks.5.linear2.weight model.diffusion_model.single_blocks.5.modulation.lin.bias model.diffusion_model.single_blocks.5.modulation.lin.weight model.diffusion_model.single_blocks.5.norm.key_norm.scale model.diffusion_model.single_blocks.5.norm.query_norm.scale model.diffusion_model.single_blocks.6.linear1.bias model.diffusion_model.single_blocks.6.linear1.weight model.diffusion_model.single_blocks.6.linear2.bias model.diffusion_model.single_blocks.6.linear2.weight model.diffusion_model.single_blocks.6.modulation.lin.bias model.diffusion_model.single_blocks.6.modulation.lin.weight model.diffusion_model.single_blocks.6.norm.key_norm.scale model.diffusion_model.single_blocks.6.norm.query_norm.scale model.diffusion_model.single_blocks.7.linear1.bias model.diffusion_model.single_blocks.7.linear1.weight model.diffusion_model.single_blocks.7.linear2.bias model.diffusion_model.single_blocks.7.linear2.weight model.diffusion_model.single_blocks.7.modulation.lin.bias model.diffusion_model.single_blocks.7.modulation.lin.weight model.diffusion_model.single_blocks.7.norm.key_norm.scale model.diffusion_model.single_blocks.7.norm.query_norm.scale model.diffusion_model.single_blocks.8.linear1.bias model.diffusion_model.single_blocks.8.linear1.weight 
model.diffusion_model.single_blocks.8.linear2.bias model.diffusion_model.single_blocks.8.linear2.weight model.diffusion_model.single_blocks.8.modulation.lin.bias model.diffusion_model.single_blocks.8.modulation.lin.weight model.diffusion_model.single_blocks.8.norm.key_norm.scale model.diffusion_model.single_blocks.8.norm.query_norm.scale model.diffusion_model.single_blocks.9.linear1.bias model.diffusion_model.single_blocks.9.linear1.weight model.diffusion_model.single_blocks.9.linear2.bias model.diffusion_model.single_blocks.9.linear2.weight model.diffusion_model.single_blocks.9.modulation.lin.bias model.diffusion_model.single_blocks.9.modulation.lin.weight model.diffusion_model.single_blocks.9.norm.key_norm.scale model.diffusion_model.single_blocks.9.norm.query_norm.scale model.diffusion_model.time_in.in_layer.bias model.diffusion_model.time_in.in_layer.weight model.diffusion_model.time_in.out_layer.bias model.diffusion_model.time_in.out_layer.weight model.diffusion_model.txt_in.bias model.diffusion_model.txt_in.weight model.diffusion_model.vector_in.in_layer.bias model.diffusion_model.vector_in.in_layer.weight model.diffusion_model.vector_in.out_layer.bias model.diffusion_model.vector_in.out_layer.weight text_encoders.clip_l.logit_scale text_encoders.clip_l.transformer.text_model.embeddings.position_embedding.weight text_encoders.clip_l.transformer.text_model.embeddings.token_embedding.weight text_encoders.clip_l.transformer.text_model.encoder.layers.0.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.0.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.0.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.0.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.0.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.0.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.0.mlp.fc2.bias 
text_encoders.clip_l.transformer.text_model.encoder.layers.0.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.0.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.0.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.0.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.0.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.0.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.0.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.0.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.0.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.1.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.1.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.1.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.1.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.1.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.1.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.1.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.1.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.1.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.1.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.1.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.1.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.1.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.1.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.1.self_attn.v_proj.bias 
text_encoders.clip_l.transformer.text_model.encoder.layers.1.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.10.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.10.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.10.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.10.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.10.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.10.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.10.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.10.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.10.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.10.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.10.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.10.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.10.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.10.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.10.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.10.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.11.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.11.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.11.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.11.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.11.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.11.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.11.mlp.fc2.bias 
text_encoders.clip_l.transformer.text_model.encoder.layers.11.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.11.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.11.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.11.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.11.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.11.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.11.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.11.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.11.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.2.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.2.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.2.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.2.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.2.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.2.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.2.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.2.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.2.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.2.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.2.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.2.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.2.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.2.self_attn.v_proj.bias 
text_encoders.clip_l.transformer.text_model.encoder.layers.2.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.3.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.3.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.3.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.3.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.3.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.3.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.3.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.3.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.3.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.3.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.3.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.3.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.3.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.3.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.3.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.3.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.4.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.4.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.4.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.4.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.4.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.4.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.4.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.4.mlp.fc2.weight 
text_encoders.clip_l.transformer.text_model.encoder.layers.4.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.4.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.4.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.4.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.4.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.4.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.4.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.4.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.5.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.5.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.5.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.5.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.5.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.5.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.5.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.5.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.5.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.5.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.5.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.5.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.5.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.5.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.5.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.5.self_attn.v_proj.weight 
text_encoders.clip_l.transformer.text_model.encoder.layers.6.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.6.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.6.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.6.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.6.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.6.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.6.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.6.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.6.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.6.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.6.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.6.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.6.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.6.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.6.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.6.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.7.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.7.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.7.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.7.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.7.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.7.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.7.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.7.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.7.self_attn.k_proj.bias 
text_encoders.clip_l.transformer.text_model.encoder.layers.7.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.7.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.8.layer_norm1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.8.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.8.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.8.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.8.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.8.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.8.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.8.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.9.layer_norm1.bias 
text_encoders.clip_l.transformer.text_model.encoder.layers.9.layer_norm1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.9.layer_norm2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.9.layer_norm2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.9.mlp.fc1.bias text_encoders.clip_l.transformer.text_model.encoder.layers.9.mlp.fc1.weight text_encoders.clip_l.transformer.text_model.encoder.layers.9.mlp.fc2.bias text_encoders.clip_l.transformer.text_model.encoder.layers.9.mlp.fc2.weight text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight text_encoders.clip_l.transformer.text_model.final_layer_norm.bias text_encoders.clip_l.transformer.text_model.final_layer_norm.weight text_encoders.clip_l.transformer.text_projection.weight text_encoders.t5xxl.logit_scale text_encoders.t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight text_encoders.t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.0.layer.0.layer_norm.weight 
text_encoders.t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.0.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.1.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.10.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.q.weight 
text_encoders.t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.11.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.11.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.12.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.13.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.k.weight 
text_encoders.t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.14.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.14.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.15.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.16.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wi_1.weight 
text_encoders.t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.16.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.17.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.18.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.19.layer.0.layer_norm.weight 
text_encoders.t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.19.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.2.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.20.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.q.weight 
text_encoders.t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.21.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.21.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.22.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.23.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.k.weight 
text_encoders.t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.3.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.3.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.4.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.5.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wo.weight 
text_encoders.t5xxl.transformer.encoder.block.5.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.6.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.7.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.8.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wi_0.weight 
text_encoders.t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.8.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.k.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.o.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.q.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.v.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.0.layer_norm.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wi_0.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wi_1.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wo.weight text_encoders.t5xxl.transformer.encoder.block.9.layer.1.layer_norm.weight text_encoders.t5xxl.transformer.encoder.final_layer_norm.weight text_encoders.t5xxl.transformer.shared.weight vae.decoder.conv_in.bias vae.decoder.conv_in.weight vae.decoder.conv_out.bias vae.decoder.conv_out.weight vae.decoder.mid.attn_1.k.bias vae.decoder.mid.attn_1.k.weight vae.decoder.mid.attn_1.norm.bias vae.decoder.mid.attn_1.norm.weight vae.decoder.mid.attn_1.proj_out.bias vae.decoder.mid.attn_1.proj_out.weight vae.decoder.mid.attn_1.q.bias vae.decoder.mid.attn_1.q.weight vae.decoder.mid.attn_1.v.bias vae.decoder.mid.attn_1.v.weight vae.decoder.mid.block_1.conv1.bias vae.decoder.mid.block_1.conv1.weight vae.decoder.mid.block_1.conv2.bias vae.decoder.mid.block_1.conv2.weight vae.decoder.mid.block_1.norm1.bias vae.decoder.mid.block_1.norm1.weight vae.decoder.mid.block_1.norm2.bias vae.decoder.mid.block_1.norm2.weight vae.decoder.mid.block_2.conv1.bias vae.decoder.mid.block_2.conv1.weight vae.decoder.mid.block_2.conv2.bias vae.decoder.mid.block_2.conv2.weight vae.decoder.mid.block_2.norm1.bias vae.decoder.mid.block_2.norm1.weight 
vae.decoder.mid.block_2.norm2.bias vae.decoder.mid.block_2.norm2.weight vae.decoder.norm_out.bias vae.decoder.norm_out.weight vae.decoder.up.0.block.0.conv1.bias vae.decoder.up.0.block.0.conv1.weight vae.decoder.up.0.block.0.conv2.bias vae.decoder.up.0.block.0.conv2.weight vae.decoder.up.0.block.0.nin_shortcut.bias vae.decoder.up.0.block.0.nin_shortcut.weight vae.decoder.up.0.block.0.norm1.bias vae.decoder.up.0.block.0.norm1.weight vae.decoder.up.0.block.0.norm2.bias vae.decoder.up.0.block.0.norm2.weight vae.decoder.up.0.block.1.conv1.bias vae.decoder.up.0.block.1.conv1.weight vae.decoder.up.0.block.1.conv2.bias vae.decoder.up.0.block.1.conv2.weight vae.decoder.up.0.block.1.norm1.bias vae.decoder.up.0.block.1.norm1.weight vae.decoder.up.0.block.1.norm2.bias vae.decoder.up.0.block.1.norm2.weight vae.decoder.up.0.block.2.conv1.bias vae.decoder.up.0.block.2.conv1.weight vae.decoder.up.0.block.2.conv2.bias vae.decoder.up.0.block.2.conv2.weight vae.decoder.up.0.block.2.norm1.bias vae.decoder.up.0.block.2.norm1.weight vae.decoder.up.0.block.2.norm2.bias vae.decoder.up.0.block.2.norm2.weight vae.decoder.up.1.block.0.conv1.bias vae.decoder.up.1.block.0.conv1.weight vae.decoder.up.1.block.0.conv2.bias vae.decoder.up.1.block.0.conv2.weight vae.decoder.up.1.block.0.nin_shortcut.bias vae.decoder.up.1.block.0.nin_shortcut.weight vae.decoder.up.1.block.0.norm1.bias vae.decoder.up.1.block.0.norm1.weight vae.decoder.up.1.block.0.norm2.bias vae.decoder.up.1.block.0.norm2.weight vae.decoder.up.1.block.1.conv1.bias vae.decoder.up.1.block.1.conv1.weight vae.decoder.up.1.block.1.conv2.bias vae.decoder.up.1.block.1.conv2.weight vae.decoder.up.1.block.1.norm1.bias vae.decoder.up.1.block.1.norm1.weight vae.decoder.up.1.block.1.norm2.bias vae.decoder.up.1.block.1.norm2.weight vae.decoder.up.1.block.2.conv1.bias vae.decoder.up.1.block.2.conv1.weight vae.decoder.up.1.block.2.conv2.bias vae.decoder.up.1.block.2.conv2.weight vae.decoder.up.1.block.2.norm1.bias 
vae.decoder.up.1.block.2.norm1.weight vae.decoder.up.1.block.2.norm2.bias vae.decoder.up.1.block.2.norm2.weight vae.decoder.up.1.upsample.conv.bias vae.decoder.up.1.upsample.conv.weight vae.decoder.up.2.block.0.conv1.bias vae.decoder.up.2.block.0.conv1.weight vae.decoder.up.2.block.0.conv2.bias vae.decoder.up.2.block.0.conv2.weight vae.decoder.up.2.block.0.norm1.bias vae.decoder.up.2.block.0.norm1.weight vae.decoder.up.2.block.0.norm2.bias vae.decoder.up.2.block.0.norm2.weight vae.decoder.up.2.block.1.conv1.bias vae.decoder.up.2.block.1.conv1.weight vae.decoder.up.2.block.1.conv2.bias vae.decoder.up.2.block.1.conv2.weight vae.decoder.up.2.block.1.norm1.bias vae.decoder.up.2.block.1.norm1.weight vae.decoder.up.2.block.1.norm2.bias vae.decoder.up.2.block.1.norm2.weight vae.decoder.up.2.block.2.conv1.bias vae.decoder.up.2.block.2.conv1.weight vae.decoder.up.2.block.2.conv2.bias vae.decoder.up.2.block.2.conv2.weight vae.decoder.up.2.block.2.norm1.bias vae.decoder.up.2.block.2.norm1.weight vae.decoder.up.2.block.2.norm2.bias vae.decoder.up.2.block.2.norm2.weight vae.decoder.up.2.upsample.conv.bias vae.decoder.up.2.upsample.conv.weight vae.decoder.up.3.block.0.conv1.bias vae.decoder.up.3.block.0.conv1.weight vae.decoder.up.3.block.0.conv2.bias vae.decoder.up.3.block.0.conv2.weight vae.decoder.up.3.block.0.norm1.bias vae.decoder.up.3.block.0.norm1.weight vae.decoder.up.3.block.0.norm2.bias vae.decoder.up.3.block.0.norm2.weight vae.decoder.up.3.block.1.conv1.bias vae.decoder.up.3.block.1.conv1.weight vae.decoder.up.3.block.1.conv2.bias vae.decoder.up.3.block.1.conv2.weight vae.decoder.up.3.block.1.norm1.bias vae.decoder.up.3.block.1.norm1.weight vae.decoder.up.3.block.1.norm2.bias vae.decoder.up.3.block.1.norm2.weight vae.decoder.up.3.block.2.conv1.bias vae.decoder.up.3.block.2.conv1.weight vae.decoder.up.3.block.2.conv2.bias vae.decoder.up.3.block.2.conv2.weight vae.decoder.up.3.block.2.norm1.bias vae.decoder.up.3.block.2.norm1.weight vae.decoder.up.3.block.2.norm2.bias 
```
vae.decoder.up.3.block.2.norm2.weight ... vae.encoder.norm_out.bias vae.encoder.norm_out.weight   [full VAE parameter-key dump elided]
Loading AE
Loaded EVA02-CLIP-L-14-336 model config.
Shape of rope freq: torch.Size([576, 64])
Loading pretrained EVA02-CLIP-L-14-336 weights (D:\Comfy_UI\ComfyUI\models\clip_vision\EVA02_CLIP_L_336_psz14_s6B.pt).
incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', ..., 'visual.blocks.23.attn.rope.freqs_sin']
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'arena_extend_strategy': 'kNextPowerOfTwo', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'do_copy_in_default_stream': '1', 'cudnn_conv_use_max_workspace': '1', 'use_tf32': '1', ...}, 'CPUExecutionProvider': {}}   [identical provider-option dict repeated before each model below]
find model: .\models\antelopev2\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
find model: .\models\antelopev2\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
find model: .\models\antelopev2\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
find model: .\models\antelopev2\glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
find model: .\models\antelopev2\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
[the five antelopev2 models above are loaded a second time]
set det-size: (640, 640)
loading from pulid_ca
loading from pulid_encoder
start_merge_step:4
Sampler [Taylor] 's cur_positiveprompts :[' a woman img, wearing a white T-shirt, blue loose hair. wake up in the bed ; 8k,RAW']
Generating ' a woman img, wearing a white T-shirt, blue loose hair. wake up in the bed ; 8k,RAW' with seed 1057119430
Requested to load FluxClipModel
Loading 1 new model
loaded completely 0.0 4777.53759765625 True
start denoise...
```
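As an aside, the `incompatible_keys.missing_keys` warning for the `rope.freqs_cos` / `rope.freqs_sin` entries in the log above is typically harmless: those rotary-embedding tables are deterministic buffers that EVA-CLIP recomputes at module init, so a checkpoint that never saved them still loads correctly with `strict=False`. A minimal sketch of the same situation (a toy module, not the EVA-CLIP code itself):

```python
import torch
import torch.nn as nn

class Rope(nn.Module):
    def __init__(self):
        super().__init__()
        # A deterministic table computed at init time. It is part of the
        # module's state dict, so a checkpoint that omits it is reported
        # in missing_keys even though nothing needs to be loaded for it.
        self.register_buffer("freqs_cos", torch.ones(4))

m = Rope()
result = m.load_state_dict({}, strict=False)  # empty "checkpoint"
assert result.missing_keys == ["freqs_cos"]   # reported, but harmless here
assert result.unexpected_keys == []
```

This is why the loader prints the long missing-key list and then continues normally.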

smthemex commented 1 month ago

You haven't selected the right model. Only two specific models are currently supported; the others come out as noise. From your log, your int fp8 checkpoint is going through the non-int code path.

tiandaoyuxi commented 1 month ago

You haven't selected the right model. Only two specific models are currently supported; the others come out as noise. From your log, your int fp8 checkpoint is going through the non-int code path.

Which model did I get wrong, so I can go download the right one? I'll also go back and reread the instructions carefully.

smthemex commented 1 month ago

Try to get it running end to end first, and please report back so I can see how long it takes.

tiandaoyuxi commented 1 month ago

After testing, only 'Kijai/flux-fp8' and 'Shakker-Labs/AWPortrait-FL' fp8 can produce images normally in pulid-flux mode, while the other fp8 or nf4 checkpoints produce noise. For now I'll just give it time and see what it outputs, then download that model and try it. What I'm currently using should be: [screenshot] I'll wait until it either errors out or finishes and then report back. Thanks!

smthemex commented 1 month ago

The 16G one is the official ComfyUI checkpoint; in my tests it produces noise. You need KJ's 11G fp8 model.
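One quick way to tell which kind of checkpoint you actually have is to count the tensor dtypes in the loaded state dict: a true fp8 file contains `torch.float8_e4m3fn` (or `float8_e5m2`) tensors, while a bf16/fp16 file does not and ends up going through the "requantization" path seen in the logs. A hedged sketch (the demo dict stands in for a real checkpoint; for a real file you would load it first with `torch.load(path, map_location="cpu")` or safetensors):

```python
import torch
from collections import Counter

def state_dict_dtypes(sd: dict) -> Counter:
    """Count tensor dtypes in a state dict. An fp8 checkpoint shows
    float8 entries; a bf16/fp16 one shows only 16/32-bit dtypes."""
    return Counter(str(t.dtype) for t in sd.values() if torch.is_tensor(t))

# Demo with an in-memory dict instead of a real checkpoint file.
demo = {"w": torch.zeros(2, dtype=torch.float16),
        "b": torch.zeros(2, dtype=torch.float32)}
print(state_dict_dtypes(demo))
```

If the counter shows no float8 dtypes at all, the file is not an fp8 checkpoint regardless of its name.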

tiandaoyuxi commented 1 month ago

```
... 'CPUExecutionProvider': {}}
loading from pulid_ca
loading from pulid_encoder
start_merge_step:4
Sampler [Taylor] 's cur_positiveprompts :[' a woman img, wearing a white T-shirt, blue loose hair. wake up in the bed ; 8k,RAW']
Generating ' a woman img, wearing a white T-shirt, blue loose hair. wake up in the bed ; 8k,RAW' with seed 1057119430
Requested to load FluxClipModel
Loading 1 new model
loaded completely 0.0 4777.53759765625 True
start denoise...
start decoder...
Done in 2950.9s. <class 'list'> <class 'list'>
Prompt executed in 3005.84 seconds
```

[screenshot] I'll switch to the supported model. Thanks for the reply!

smthemex commented 1 month ago

That doesn't seem right. A run only takes me about 800s on a 12G card; how is it even slower on your 4090?

tiandaoyuxi commented 1 month ago

That doesn't seem right. A run only takes me about 800s on a 12G card; how is it even slower on your 4090?

Then I don't know where the problem is... very confusing.

smthemex commented 1 month ago

Please run a test with KJ's model. If the model is correct, the console will print the model type as None.

tiandaoyuxi commented 1 month ago

[screenshot]

Is this the right setup? I've downloaded the KJ model and it's running now, but it's still very slow. Based on this screenshot, can you tell whether any of the parameters are set wrong?


```
got prompt
clip missing: ['text_projection.weight']
run in id number : 1
Please 'pip install apex'
Init model in fp8
loading weight_dtype is None
Start a requantization process...
Model is requantized!
Loading AE
Loaded EVA02-CLIP-L-14-336 model config.
Shape of rope freq: torch.Size([576, 64])
Loading pretrained EVA02-CLIP-L-14-336 weights (D:\Comfy_UI\ComfyUI\models\clip_vision\EVA02_CLIP_L_336_psz14_s6B.pt).
incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', ..., 'visual.blocks.23.attn.rope.freqs_sin']
[ONNX provider-option dicts and the two rounds of antelopev2 model loading are identical to the earlier log]
set det-size: (640, 640)
loading from pulid_ca
loading from pulid_encoder
start_merge_step:4
Sampler [Taylor] 's cur_positiveprompts :[' a woman img, wearing a white T-shirt, blue loose hair. wake up in the bed ; 8k,RAW']
Generating ' a woman img, wearing a white T-shirt, blue loose hair. wake up in the bed ; 8k,RAW' with seed 1375298303
Requested to load FluxClipModel
Loading 1 new model
loaded completely 0.0 4777.53759765625 True
start denoise...
```

tiandaoyuxi commented 1 month ago

Prompt executed in 2631.53 seconds. With the settings above, a single run just takes far too long, and I don't know how to optimize it. My hardware config: [screenshot] [screenshot]

```
loading from pulid_ca
loading from pulid_encoder
start_merge_step:4
Sampler [Taylor] 's cur_positiveprompts :[' a woman img, wearing a white T-shirt, blue loose hair. wake up in the bed ; 8k,RAW']
Generating ' a woman img, wearing a white T-shirt, blue loose hair. wake up in the bed ; 8k,RAW' with seed 1375298303
Requested to load FluxClipModel
Loading 1 new model
loaded completely 0.0 4777.53759765625 True
start denoise...
start decoder...
Done in 2480.2s. <class 'list'> <class 'list'>
Prompt executed in 2631.53 seconds
```

smthemex commented 1 month ago

The GPU isn't doing the work; I'm fixing this bug.

tiandaoyuxi commented 1 month ago

The GPU isn't doing the work; I'm fixing this bug.

The GPU is doing the work: 100% GPU utilization during execution. This screenshot is from after the run finished; I'm only posting it to show my machine's configuration. The GPU stays at peak throughout the run, yet a single run still takes 2631.53 seconds. [screenshot]
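For the "GPU shows 100% but a run still takes ~2600s" symptom, it is worth confirming where the model's weights actually live: if part of the model stays on `cpu` (or is offloaded and reloaded every step), utilization spikes can coexist with very slow denoising. A small sketch, assuming `model` stands for whatever module the pipeline has loaded (shown here on a toy module):

```python
import torch
from collections import Counter

def device_summary(model: torch.nn.Module) -> Counter:
    """Count how many parameters sit on each device."""
    return Counter(str(p.device) for p in model.parameters())

# Demo on a toy module; with the real pipeline you would pass the Flux model.
toy = torch.nn.Linear(4, 4)
print(device_summary(toy))
```

If the summary shows a large fraction of parameters on `cpu` rather than `cuda:0`, the slowdown is coming from host/device transfers rather than the sampler itself.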