ssitu / ComfyUI_fabric

ComfyUI nodes based on the paper "FABRIC: Personalizing Diffusion Models with Iterative Feedback" (Feedback via Attention-Based Reference Image Conditioning)
GNU General Public License v3.0

[Bug] fp16 error #5

Closed. diaopal closed this issue 11 months ago.

diaopal commented 12 months ago
ComfyUI/ 03:11:24 AM ❯ comfyui --enable-cors-header --preview-method taesd --port 8181 --disable-xformers --dont-upcast-attention
Total VRAM 12287 MB, total RAM 16324 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : native
VAE dtype: torch.bfloat16
disabling upcasting of attention
Using pytorch cross attention

[rgthree] Optimizing ComfyUI recursive execution. If queueing and/or re-queueing seems broken, change "patch_recursive_execution" to false in rgthree_config.json

Prestartup times for custom nodes:
   3.8 seconds: C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy

rgthree's comfy nodes: Loaded 14 exciting nodes.

Import times for custom nodes:
   0.0 seconds: C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_fabric
   0.0 seconds: C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy

Starting server

To see the GUI go to: http://127.0.0.1:8181
got prompt
model_type EPS
adm 0
making attention of type 'vanilla-pytorch' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-pytorch' with 512 in_channels
missing {'cond_stage_model.logit_scale', 'cond_stage_model.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'embedding_manager.embedder.transformer.text_model.embeddings.position_embedding.weight', 'embedding_manager.embedder.transformer.text_model.embeddings.position_ids', 'embedding_manager.embedder.transformer.text_model.embeddings.token_embedding.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.0.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.1.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm2.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.10.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.10.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.11.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.k_proj.weight', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.2.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.3.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.4.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm1.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.5.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.6.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.mlp.fc2.weight', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.layer_norm2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc1.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc1.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc2.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.mlp.fc2.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight', 'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias', 
'embedding_manager.embedder.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight', 'embedding_manager.embedder.transformer.text_model.final_layer_norm.bias', 'embedding_manager.embedder.transformer.text_model.final_layer_norm.weight', 'log_one_minus_alphas_cumprod', 'lora_te_text_model_encoder_layers_0_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_0_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_0_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_0_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_0_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_10_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_10_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_10_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_10_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_11_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_11_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_11_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_q_proj.alpha', 
'lora_te_text_model_encoder_layers_11_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_11_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_1_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_1_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_1_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_1_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_2_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_2_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_2_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_2_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_3_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_3_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_3_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_out_proj.lora_up.weight', 
'lora_te_text_model_encoder_layers_3_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_3_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_4_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_4_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_4_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_4_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_5_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_5_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_5_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_5_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_6_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_6_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_6_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_out_proj.lora_down.weight', 
'lora_te_text_model_encoder_layers_6_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_6_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_7_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_7_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_7_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_7_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_8_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_8_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_8_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_8_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_8_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_8_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_self_attn_out_proj.alpha', 'lora_te_text_model_encoder_layers_8_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_8_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_8_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_8_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_8_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_8_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_8_self_attn_v_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_9_mlp_fc1.alpha', 'lora_te_text_model_encoder_layers_9_mlp_fc1.lora_down.weight', 'lora_te_text_model_encoder_layers_9_mlp_fc1.lora_up.weight', 'lora_te_text_model_encoder_layers_9_mlp_fc2.alpha', 'lora_te_text_model_encoder_layers_9_mlp_fc2.lora_down.weight', 'lora_te_text_model_encoder_layers_9_mlp_fc2.lora_up.weight', 'lora_te_text_model_encoder_layers_9_self_attn_k_proj.alpha', 'lora_te_text_model_encoder_layers_9_self_attn_k_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_9_self_attn_k_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_9_self_attn_out_proj.alpha', 
'lora_te_text_model_encoder_layers_9_self_attn_out_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_9_self_attn_out_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_9_self_attn_q_proj.alpha', 'lora_te_text_model_encoder_layers_9_self_attn_q_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_9_self_attn_q_proj.lora_up.weight', 'lora_te_text_model_encoder_layers_9_self_attn_v_proj.alpha', 'lora_te_text_model_encoder_layers_9_self_attn_v_proj.lora_down.weight', 'lora_te_text_model_encoder_layers_9_self_attn_v_proj.lora_up.weight', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
making attention of type 'vanilla-pytorch' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-pytorch' with 512 in_channels
loading new
[FABRIC] No reference latents found. Defaulting to regular KSampler.
C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_utils.py:847: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
loading new
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  1.29it/s]
[FABRIC] 2 positive latents, 1 negative latents
loading new
unload clone 1
  0%|                                                                                                      | 0/3 [00:00<?, ?it/s]
!!! Exception during processing !!!
Traceback (most recent call last):
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_fabric\nodes.py", line 181, in sample
    return KSamplerFABRICAdv().sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_fabric\nodes.py", line 138, in sample
    return fabric_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_fabric\fabric\fabric.py", line 49, in fabric_sample
    samples = KSamplerAdvanced().sample(model_patched, add_noise, noise_seed, steps, cfg, sampler_name, scheduler, positive,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1270, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 93, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 742, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 580, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 289, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 263, in calc_cond_uncond_batch
    output = model_options['model_function_wrapper'](model_function, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_fabric\fabric\fabric.py", line 266, in unet_wrapper
    out = model_func(input, ts, **c)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 627, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 56, in forward_timestep_embed
    x = layer(x, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 693, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 525, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 123, in checkpoint
    return func(*inputs)
           ^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 588, in _forward
    n = self.attn1(n, context=context_attn1, value=value_attn1)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 469, in forward
    k = self.to_k(context)
        ^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1522, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1531, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Anon\Downloads\Programs\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 18, in forward
    return torch.nn.functional.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != struct c10::Half

Prompt executed in 9.29 seconds
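
The failure is a dtype mismatch inside the self-attention key projection: with --dont-upcast-attention the UNet weights stay in fp16, while the reference conditioning injected by FABRIC apparently reaches the projection as float32. The mismatch is easy to reproduce in isolation (illustrative shapes only, not taken from the node):

import torch

# fp16 projection weight (as in a half-precision UNet) vs. float32 attention context
weight = torch.randn(320, 320, dtype=torch.float16)
context = torch.randn(2, 77, 320, dtype=torch.float32)

try:
    torch.nn.functional.linear(context, weight)
except RuntimeError as err:
    print(err)  # expected mat1 and mat2 to have the same dtype, but got: float != c10::Half

# Casting the input to the weight's dtype avoids the error; fp16 matmul may
# require a CUDA device on older PyTorch builds, hence the guard.
if torch.cuda.is_available():
    out = torch.nn.functional.linear(context.to(weight.dtype).cuda(), weight.cuda())
    print(out.dtype)  # torch.float16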

As a workaround, comfy.ops.Linear.forward can be patched to cast the input to the weight's dtype before the matmul:

-        return torch.nn.functional.linear(input, self.weight, self.bias)
+        return torch.nn.functional.linear(input.to(self.weight.dtype), self.weight, self.bias)
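
If editing comfy/ops.py directly is undesirable, the same cast can be applied as a monkey-patch from a small script or custom node. This is only a sketch of the workaround above: it assumes the comfy.ops.Linear class named in the traceback and that the patch runs before sampling starts.

# Hypothetical monkey-patch equivalent of the diff above.
import torch
import comfy.ops

def _forward_cast_input(self, input):
    # Match the activation dtype to the (possibly fp16) weight before the matmul.
    return torch.nn.functional.linear(input.to(self.weight.dtype), self.weight, self.bias)

comfy.ops.Linear.forward = _forward_cast_input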
ssitu commented 11 months ago

This should be fixed now with 3709bcfb34a1f57678e271ddd0af34e80bf8a6e1
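
For anyone pinned to an older revision of the node, a FABRIC-side guard along the same lines is one plausible shape of such a fix (a hypothetical illustration only; the referenced commit may do this differently): cast the injected reference hidden states to the dtype the UNet's projection layers expect before they enter the attention context.

import torch

# Hypothetical sketch -- not necessarily what the referenced commit does.
def match_model_dtype(reference_hs: torch.Tensor, weight_dtype: torch.dtype) -> torch.Tensor:
    # Cast FABRIC's injected reference hidden states to the UNet weight dtype (e.g. fp16)
    # before the attention projections, avoiding the mat1/mat2 dtype error above.
    return reference_hs if reference_hs.dtype == weight_dtype else reference_hs.to(weight_dtype)

print(match_model_dtype(torch.zeros(1, 4, 32, 32), torch.float16).dtype)  # torch.float16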