kijai / ComfyUI-FluxTrainer

Apache License 2.0
439 stars 20 forks

error while running the nodes #33

Closed kotaxyz closed 1 month ago

kotaxyz commented 1 month ago

I have an RTX 3060 Ti with 8 GB VRAM and 40 GB RAM, just for info in case that is the issue.

                INFO     [Dataset 0]                                                              config_util.py:576
                INFO     loading image sizes.                                                      train_util.py:876

100%|████████████████████████████████████████████████████████████████████████████████| 41/41 [00:00<00:00, 1576.98it/s]
                INFO     make buckets                                                              train_util.py:882
                INFO     number of images (including repeats) / 各bucketの画像枚数(繰り返し回数を含む) train_util.py:928
                INFO     bucket 0: resolution (512, 512), count: 41                                train_util.py:933
                INFO     mean ar error (without repeats): 0.0                                      train_util.py:938
                INFO     preparing accelerator                                                     train_network.py:338
accelerator device: cuda
                INFO     Building Flux model dev                                                   flux_utils.py:22
                INFO     Loading state dict from I:\ComfyUI\ComfyUI\models\unet\flux1-dev-fp8.safetensors flux_utils.py:27
                INFO     Loaded Flux:                                                              flux_utils.py:30
                INFO     prepare split model                                                       flux_train_network_comfy.py:68
2024-09-03 04:06:56 INFO load state dict for lower                                                 flux_train_network_comfy.py:75
2024-09-03 04:08:22 INFO load state dict for upper                                                 flux_train_network_comfy.py:80
                INFO     prepare upper model                                                       flux_train_network_comfy.py:83
2024-09-03 04:08:31 INFO split model prepared                                                      flux_train_network_comfy.py:98
2024-09-03 04:08:32 INFO Building CLIP                                                             flux_utils.py:47
                INFO     Loading state dict from I:\ComfyUI\ComfyUI\models\clip\clip_l.safetensors flux_utils.py:140
2024-09-03 04:08:33 INFO Loaded CLIP:                                                              flux_utils.py:143
2024-09-03 04:08:34 INFO Loading state dict from I:\ComfyUI\ComfyUI\models\clip\t5xxl_fp8_e4m3fn.safetensors flux_utils.py:186
2024-09-03 04:08:35 INFO Loaded T5xxl:                                                             flux_utils.py:189
                INFO     Building AutoEncoder                                                      flux_utils.py:35
                INFO     Loading state dict from I:\ComfyUI\ComfyUI\models\vae\FLUX.1-dev.safetensors flux_utils.py:39
2024-09-03 04:08:36 INFO Loaded AE:                                                                flux_utils.py:42
_IncompatibleKeys(missing_keys=['encoder.down.0.block.0.norm1.weight', 'encoder.down.0.block.0.norm1.bias', 'encoder.down.0.block.0.conv1.weight', 'encoder.down.0.block.0.conv1.bias', 'encoder.down.0.block.0.norm2.weight', 'encoder.down.0.block.0.norm2.bias', 'encoder.down.0.block.0.conv2.weight', 'encoder.down.0.block.0.conv2.bias', 'encoder.down.0.block.1.norm1.weight', 'encoder.down.0.block.1.norm1.bias',
'encoder.down.0.block.1.conv1.weight', 'encoder.down.0.block.1.conv1.bias', 'encoder.down.0.block.1.norm2.weight', 'encoder.down.0.block.1.norm2.bias', 'encoder.down.0.block.1.conv2.weight', 'encoder.down.0.block.1.conv2.bias', 'encoder.down.0.downsample.conv.weight', 'encoder.down.0.downsample.conv.bias', 'encoder.down.1.block.0.norm1.weight', 'encoder.down.1.block.0.norm1.bias', 'encoder.down.1.block.0.conv1.weight', 'encoder.down.1.block.0.conv1.bias', 'encoder.down.1.block.0.norm2.weight', 'encoder.down.1.block.0.norm2.bias', 'encoder.down.1.block.0.conv2.weight', 'encoder.down.1.block.0.conv2.bias', 'encoder.down.1.block.0.nin_shortcut.weight', 'encoder.down.1.block.0.nin_shortcut.bias', 'encoder.down.1.block.1.norm1.weight', 'encoder.down.1.block.1.norm1.bias', 'encoder.down.1.block.1.conv1.weight', 'encoder.down.1.block.1.conv1.bias', 'encoder.down.1.block.1.norm2.weight', 'encoder.down.1.block.1.norm2.bias', 'encoder.down.1.block.1.conv2.weight', 'encoder.down.1.block.1.conv2.bias', 'encoder.down.1.downsample.conv.weight', 'encoder.down.1.downsample.conv.bias', 'encoder.down.2.block.0.norm1.weight', 'encoder.down.2.block.0.norm1.bias', 'encoder.down.2.block.0.conv1.weight', 'encoder.down.2.block.0.conv1.bias', 'encoder.down.2.block.0.norm2.weight', 'encoder.down.2.block.0.norm2.bias', 'encoder.down.2.block.0.conv2.weight', 'encoder.down.2.block.0.conv2.bias', 'encoder.down.2.block.0.nin_shortcut.weight', 'encoder.down.2.block.0.nin_shortcut.bias', 'encoder.down.2.block.1.norm1.weight', 'encoder.down.2.block.1.norm1.bias', 'encoder.down.2.block.1.conv1.weight', 'encoder.down.2.block.1.conv1.bias', 'encoder.down.2.block.1.norm2.weight', 'encoder.down.2.block.1.norm2.bias', 'encoder.down.2.block.1.conv2.weight', 'encoder.down.2.block.1.conv2.bias', 'encoder.down.2.downsample.conv.weight', 'encoder.down.2.downsample.conv.bias', 'encoder.down.3.block.0.norm1.weight', 'encoder.down.3.block.0.norm1.bias', 'encoder.down.3.block.0.conv1.weight', 
'encoder.down.3.block.0.conv1.bias', 'encoder.down.3.block.0.norm2.weight', 'encoder.down.3.block.0.norm2.bias', 'encoder.down.3.block.0.conv2.weight', 'encoder.down.3.block.0.conv2.bias', 'encoder.down.3.block.1.norm1.weight', 'encoder.down.3.block.1.norm1.bias', 'encoder.down.3.block.1.conv1.weight', 'encoder.down.3.block.1.conv1.bias', 'encoder.down.3.block.1.norm2.weight', 'encoder.down.3.block.1.norm2.bias', 'encoder.down.3.block.1.conv2.weight', 'encoder.down.3.block.1.conv2.bias', 'encoder.mid.block_1.norm1.weight', 'encoder.mid.block_1.norm1.bias', 'encoder.mid.block_1.conv1.weight', 'encoder.mid.block_1.conv1.bias', 'encoder.mid.block_1.norm2.weight', 'encoder.mid.block_1.norm2.bias', 'encoder.mid.block_1.conv2.weight', 'encoder.mid.block_1.conv2.bias', 'encoder.mid.attn_1.norm.weight', 'encoder.mid.attn_1.norm.bias', 'encoder.mid.attn_1.q.weight', 'encoder.mid.attn_1.q.bias', 'encoder.mid.attn_1.k.weight', 'encoder.mid.attn_1.k.bias', 'encoder.mid.attn_1.v.weight', 'encoder.mid.attn_1.v.bias', 'encoder.mid.attn_1.proj_out.weight', 'encoder.mid.attn_1.proj_out.bias', 'encoder.mid.block_2.norm1.weight', 'encoder.mid.block_2.norm1.bias', 'encoder.mid.block_2.conv1.weight', 'encoder.mid.block_2.conv1.bias', 'encoder.mid.block_2.norm2.weight', 'encoder.mid.block_2.norm2.bias', 'encoder.mid.block_2.conv2.weight', 'encoder.mid.block_2.conv2.bias', 'encoder.norm_out.weight', 'encoder.norm_out.bias', 'decoder.mid.block_1.norm1.weight', 'decoder.mid.block_1.norm1.bias', 'decoder.mid.block_1.conv1.weight', 'decoder.mid.block_1.conv1.bias', 'decoder.mid.block_1.norm2.weight', 'decoder.mid.block_1.norm2.bias', 'decoder.mid.block_1.conv2.weight', 'decoder.mid.block_1.conv2.bias', 'decoder.mid.attn_1.norm.weight', 'decoder.mid.attn_1.norm.bias', 'decoder.mid.attn_1.q.weight', 'decoder.mid.attn_1.q.bias', 'decoder.mid.attn_1.k.weight', 'decoder.mid.attn_1.k.bias', 'decoder.mid.attn_1.v.weight', 'decoder.mid.attn_1.v.bias', 'decoder.mid.attn_1.proj_out.weight', 
'decoder.mid.attn_1.proj_out.bias', 'decoder.mid.block_2.norm1.weight', 'decoder.mid.block_2.norm1.bias', 'decoder.mid.block_2.conv1.weight', 'decoder.mid.block_2.conv1.bias', 'decoder.mid.block_2.norm2.weight', 'decoder.mid.block_2.norm2.bias', 'decoder.mid.block_2.conv2.weight', 'decoder.mid.block_2.conv2.bias', 'decoder.up.0.block.0.norm1.weight', 'decoder.up.0.block.0.norm1.bias', 'decoder.up.0.block.0.conv1.weight', 'decoder.up.0.block.0.conv1.bias', 'decoder.up.0.block.0.norm2.weight', 'decoder.up.0.block.0.norm2.bias', 'decoder.up.0.block.0.conv2.weight', 'decoder.up.0.block.0.conv2.bias', 'decoder.up.0.block.0.nin_shortcut.weight', 'decoder.up.0.block.0.nin_shortcut.bias', 'decoder.up.0.block.1.norm1.weight', 'decoder.up.0.block.1.norm1.bias', 'decoder.up.0.block.1.conv1.weight', 'decoder.up.0.block.1.conv1.bias', 'decoder.up.0.block.1.norm2.weight', 'decoder.up.0.block.1.norm2.bias', 'decoder.up.0.block.1.conv2.weight', 'decoder.up.0.block.1.conv2.bias', 'decoder.up.0.block.2.norm1.weight', 'decoder.up.0.block.2.norm1.bias', 'decoder.up.0.block.2.conv1.weight', 'decoder.up.0.block.2.conv1.bias', 'decoder.up.0.block.2.norm2.weight', 'decoder.up.0.block.2.norm2.bias', 'decoder.up.0.block.2.conv2.weight', 'decoder.up.0.block.2.conv2.bias', 'decoder.up.1.block.0.norm1.weight', 'decoder.up.1.block.0.norm1.bias', 'decoder.up.1.block.0.conv1.weight', 'decoder.up.1.block.0.conv1.bias', 'decoder.up.1.block.0.norm2.weight', 'decoder.up.1.block.0.norm2.bias', 'decoder.up.1.block.0.conv2.weight', 'decoder.up.1.block.0.conv2.bias', 'decoder.up.1.block.0.nin_shortcut.weight', 'decoder.up.1.block.0.nin_shortcut.bias', 'decoder.up.1.block.1.norm1.weight', 'decoder.up.1.block.1.norm1.bias', 'decoder.up.1.block.1.conv1.weight', 'decoder.up.1.block.1.conv1.bias', 'decoder.up.1.block.1.norm2.weight', 'decoder.up.1.block.1.norm2.bias', 'decoder.up.1.block.1.conv2.weight', 'decoder.up.1.block.1.conv2.bias', 'decoder.up.1.block.2.norm1.weight', 'decoder.up.1.block.2.norm1.bias', 
'decoder.up.1.block.2.conv1.weight', 'decoder.up.1.block.2.conv1.bias', 'decoder.up.1.block.2.norm2.weight', 'decoder.up.1.block.2.norm2.bias', 'decoder.up.1.block.2.conv2.weight', 'decoder.up.1.block.2.conv2.bias', 'decoder.up.1.upsample.conv.weight', 'decoder.up.1.upsample.conv.bias', 'decoder.up.2.block.0.norm1.weight', 'decoder.up.2.block.0.norm1.bias', 'decoder.up.2.block.0.conv1.weight', 'decoder.up.2.block.0.conv1.bias', 'decoder.up.2.block.0.norm2.weight', 'decoder.up.2.block.0.norm2.bias', 'decoder.up.2.block.0.conv2.weight', 'decoder.up.2.block.0.conv2.bias', 'decoder.up.2.block.1.norm1.weight', 'decoder.up.2.block.1.norm1.bias', 'decoder.up.2.block.1.conv1.weight', 'decoder.up.2.block.1.conv1.bias', 'decoder.up.2.block.1.norm2.weight', 'decoder.up.2.block.1.norm2.bias', 'decoder.up.2.block.1.conv2.weight', 'decoder.up.2.block.1.conv2.bias', 'decoder.up.2.block.2.norm1.weight', 'decoder.up.2.block.2.norm1.bias', 'decoder.up.2.block.2.conv1.weight', 'decoder.up.2.block.2.conv1.bias', 'decoder.up.2.block.2.norm2.weight', 'decoder.up.2.block.2.norm2.bias', 'decoder.up.2.block.2.conv2.weight', 'decoder.up.2.block.2.conv2.bias', 'decoder.up.2.upsample.conv.weight', 'decoder.up.2.upsample.conv.bias', 'decoder.up.3.block.0.norm1.weight', 'decoder.up.3.block.0.norm1.bias', 'decoder.up.3.block.0.conv1.weight', 'decoder.up.3.block.0.conv1.bias', 'decoder.up.3.block.0.norm2.weight', 'decoder.up.3.block.0.norm2.bias', 'decoder.up.3.block.0.conv2.weight', 'decoder.up.3.block.0.conv2.bias', 'decoder.up.3.block.1.norm1.weight', 'decoder.up.3.block.1.norm1.bias', 'decoder.up.3.block.1.conv1.weight', 'decoder.up.3.block.1.conv1.bias', 'decoder.up.3.block.1.norm2.weight', 'decoder.up.3.block.1.norm2.bias', 'decoder.up.3.block.1.conv2.weight', 'decoder.up.3.block.1.conv2.bias', 'decoder.up.3.block.2.norm1.weight', 'decoder.up.3.block.2.norm1.bias', 'decoder.up.3.block.2.conv1.weight', 'decoder.up.3.block.2.conv1.bias', 'decoder.up.3.block.2.norm2.weight', 
'decoder.up.3.block.2.norm2.bias', 'decoder.up.3.block.2.conv2.weight', 'decoder.up.3.block.2.conv2.bias', 'decoder.up.3.upsample.conv.weight', 'decoder.up.3.upsample.conv.bias', 'decoder.norm_out.weight', 'decoder.norm_out.bias'], unexpected_keys=['encoder.conv_norm_out.bias', 'encoder.conv_norm_out.weight', 'encoder.down_blocks.0.downsamplers.0.conv.bias', 'encoder.down_blocks.0.downsamplers.0.conv.weight', 'encoder.down_blocks.0.resnets.0.conv1.bias', 'encoder.down_blocks.0.resnets.0.conv1.weight', 'encoder.down_blocks.0.resnets.0.conv2.bias', 'encoder.down_blocks.0.resnets.0.conv2.weight', 'encoder.down_blocks.0.resnets.0.norm1.bias', 'encoder.down_blocks.0.resnets.0.norm1.weight', 'encoder.down_blocks.0.resnets.0.norm2.bias', 'encoder.down_blocks.0.resnets.0.norm2.weight', 'encoder.down_blocks.0.resnets.1.conv1.bias', 'encoder.down_blocks.0.resnets.1.conv1.weight', 'encoder.down_blocks.0.resnets.1.conv2.bias', 'encoder.down_blocks.0.resnets.1.conv2.weight', 'encoder.down_blocks.0.resnets.1.norm1.bias', 'encoder.down_blocks.0.resnets.1.norm1.weight', 'encoder.down_blocks.0.resnets.1.norm2.bias', 'encoder.down_blocks.0.resnets.1.norm2.weight', 'encoder.down_blocks.1.downsamplers.0.conv.bias', 'encoder.down_blocks.1.downsamplers.0.conv.weight', 'encoder.down_blocks.1.resnets.0.conv1.bias', 'encoder.down_blocks.1.resnets.0.conv1.weight', 'encoder.down_blocks.1.resnets.0.conv2.bias', 'encoder.down_blocks.1.resnets.0.conv2.weight', 'encoder.down_blocks.1.resnets.0.conv_shortcut.bias', 'encoder.down_blocks.1.resnets.0.conv_shortcut.weight', 'encoder.down_blocks.1.resnets.0.norm1.bias', 'encoder.down_blocks.1.resnets.0.norm1.weight', 'encoder.down_blocks.1.resnets.0.norm2.bias', 'encoder.down_blocks.1.resnets.0.norm2.weight', 'encoder.down_blocks.1.resnets.1.conv1.bias', 'encoder.down_blocks.1.resnets.1.conv1.weight', 'encoder.down_blocks.1.resnets.1.conv2.bias', 'encoder.down_blocks.1.resnets.1.conv2.weight', 'encoder.down_blocks.1.resnets.1.norm1.bias', 
'encoder.down_blocks.1.resnets.1.norm1.weight', 'encoder.down_blocks.1.resnets.1.norm2.bias', 'encoder.down_blocks.1.resnets.1.norm2.weight', 'encoder.down_blocks.2.downsamplers.0.conv.bias', 'encoder.down_blocks.2.downsamplers.0.conv.weight', 'encoder.down_blocks.2.resnets.0.conv1.bias', 'encoder.down_blocks.2.resnets.0.conv1.weight', 'encoder.down_blocks.2.resnets.0.conv2.bias', 'encoder.down_blocks.2.resnets.0.conv2.weight', 'encoder.down_blocks.2.resnets.0.conv_shortcut.bias', 'encoder.down_blocks.2.resnets.0.conv_shortcut.weight', 'encoder.down_blocks.2.resnets.0.norm1.bias', 'encoder.down_blocks.2.resnets.0.norm1.weight', 'encoder.down_blocks.2.resnets.0.norm2.bias', 'encoder.down_blocks.2.resnets.0.norm2.weight', 'encoder.down_blocks.2.resnets.1.conv1.bias', 'encoder.down_blocks.2.resnets.1.conv1.weight', 'encoder.down_blocks.2.resnets.1.conv2.bias', 'encoder.down_blocks.2.resnets.1.conv2.weight', 'encoder.down_blocks.2.resnets.1.norm1.bias', 'encoder.down_blocks.2.resnets.1.norm1.weight', 'encoder.down_blocks.2.resnets.1.norm2.bias', 'encoder.down_blocks.2.resnets.1.norm2.weight', 'encoder.down_blocks.3.resnets.0.conv1.bias', 'encoder.down_blocks.3.resnets.0.conv1.weight', 'encoder.down_blocks.3.resnets.0.conv2.bias', 'encoder.down_blocks.3.resnets.0.conv2.weight', 'encoder.down_blocks.3.resnets.0.norm1.bias', 'encoder.down_blocks.3.resnets.0.norm1.weight', 'encoder.down_blocks.3.resnets.0.norm2.bias', 'encoder.down_blocks.3.resnets.0.norm2.weight', 'encoder.down_blocks.3.resnets.1.conv1.bias', 'encoder.down_blocks.3.resnets.1.conv1.weight', 'encoder.down_blocks.3.resnets.1.conv2.bias', 'encoder.down_blocks.3.resnets.1.conv2.weight', 'encoder.down_blocks.3.resnets.1.norm1.bias', 'encoder.down_blocks.3.resnets.1.norm1.weight', 'encoder.down_blocks.3.resnets.1.norm2.bias', 'encoder.down_blocks.3.resnets.1.norm2.weight', 'encoder.mid_block.attentions.0.group_norm.bias', 'encoder.mid_block.attentions.0.group_norm.weight', 
'encoder.mid_block.attentions.0.to_k.bias', 'encoder.mid_block.attentions.0.to_k.weight', 'encoder.mid_block.attentions.0.to_out.0.bias', 'encoder.mid_block.attentions.0.to_out.0.weight', 'encoder.mid_block.attentions.0.to_q.bias', 'encoder.mid_block.attentions.0.to_q.weight', 'encoder.mid_block.attentions.0.to_v.bias', 'encoder.mid_block.attentions.0.to_v.weight', 'encoder.mid_block.resnets.0.conv1.bias', 'encoder.mid_block.resnets.0.conv1.weight', 'encoder.mid_block.resnets.0.conv2.bias', 'encoder.mid_block.resnets.0.conv2.weight', 'encoder.mid_block.resnets.0.norm1.bias', 'encoder.mid_block.resnets.0.norm1.weight', 'encoder.mid_block.resnets.0.norm2.bias', 'encoder.mid_block.resnets.0.norm2.weight', 'encoder.mid_block.resnets.1.conv1.bias', 'encoder.mid_block.resnets.1.conv1.weight', 'encoder.mid_block.resnets.1.conv2.bias', 'encoder.mid_block.resnets.1.conv2.weight', 'encoder.mid_block.resnets.1.norm1.bias', 'encoder.mid_block.resnets.1.norm1.weight', 'encoder.mid_block.resnets.1.norm2.bias', 'encoder.mid_block.resnets.1.norm2.weight', 'decoder.conv_norm_out.bias', 'decoder.conv_norm_out.weight', 'decoder.mid_block.attentions.0.group_norm.bias', 'decoder.mid_block.attentions.0.group_norm.weight', 'decoder.mid_block.attentions.0.to_k.bias', 'decoder.mid_block.attentions.0.to_k.weight', 'decoder.mid_block.attentions.0.to_out.0.bias', 'decoder.mid_block.attentions.0.to_out.0.weight', 'decoder.mid_block.attentions.0.to_q.bias', 'decoder.mid_block.attentions.0.to_q.weight', 'decoder.mid_block.attentions.0.to_v.bias', 'decoder.mid_block.attentions.0.to_v.weight', 'decoder.mid_block.resnets.0.conv1.bias', 'decoder.mid_block.resnets.0.conv1.weight', 'decoder.mid_block.resnets.0.conv2.bias', 'decoder.mid_block.resnets.0.conv2.weight', 'decoder.mid_block.resnets.0.norm1.bias', 'decoder.mid_block.resnets.0.norm1.weight', 'decoder.mid_block.resnets.0.norm2.bias', 'decoder.mid_block.resnets.0.norm2.weight', 'decoder.mid_block.resnets.1.conv1.bias', 
'decoder.mid_block.resnets.1.conv1.weight', 'decoder.mid_block.resnets.1.conv2.bias', 'decoder.mid_block.resnets.1.conv2.weight', 'decoder.mid_block.resnets.1.norm1.bias', 'decoder.mid_block.resnets.1.norm1.weight', 'decoder.mid_block.resnets.1.norm2.bias', 'decoder.mid_block.resnets.1.norm2.weight', 'decoder.up_blocks.0.resnets.0.conv1.bias', 'decoder.up_blocks.0.resnets.0.conv1.weight', 'decoder.up_blocks.0.resnets.0.conv2.bias', 'decoder.up_blocks.0.resnets.0.conv2.weight', 'decoder.up_blocks.0.resnets.0.norm1.bias', 'decoder.up_blocks.0.resnets.0.norm1.weight', 'decoder.up_blocks.0.resnets.0.norm2.bias', 'decoder.up_blocks.0.resnets.0.norm2.weight', 'decoder.up_blocks.0.resnets.1.conv1.bias', 'decoder.up_blocks.0.resnets.1.conv1.weight', 'decoder.up_blocks.0.resnets.1.conv2.bias', 'decoder.up_blocks.0.resnets.1.conv2.weight', 'decoder.up_blocks.0.resnets.1.norm1.bias', 'decoder.up_blocks.0.resnets.1.norm1.weight', 'decoder.up_blocks.0.resnets.1.norm2.bias', 'decoder.up_blocks.0.resnets.1.norm2.weight', 'decoder.up_blocks.0.resnets.2.conv1.bias', 'decoder.up_blocks.0.resnets.2.conv1.weight', 'decoder.up_blocks.0.resnets.2.conv2.bias', 'decoder.up_blocks.0.resnets.2.conv2.weight', 'decoder.up_blocks.0.resnets.2.norm1.bias', 'decoder.up_blocks.0.resnets.2.norm1.weight', 'decoder.up_blocks.0.resnets.2.norm2.bias', 'decoder.up_blocks.0.resnets.2.norm2.weight', 'decoder.up_blocks.0.upsamplers.0.conv.bias', 'decoder.up_blocks.0.upsamplers.0.conv.weight', 'decoder.up_blocks.1.resnets.0.conv1.bias', 'decoder.up_blocks.1.resnets.0.conv1.weight', 'decoder.up_blocks.1.resnets.0.conv2.bias', 'decoder.up_blocks.1.resnets.0.conv2.weight', 'decoder.up_blocks.1.resnets.0.norm1.bias', 'decoder.up_blocks.1.resnets.0.norm1.weight', 'decoder.up_blocks.1.resnets.0.norm2.bias', 'decoder.up_blocks.1.resnets.0.norm2.weight', 'decoder.up_blocks.1.resnets.1.conv1.bias', 'decoder.up_blocks.1.resnets.1.conv1.weight', 'decoder.up_blocks.1.resnets.1.conv2.bias', 
'decoder.up_blocks.1.resnets.1.conv2.weight', 'decoder.up_blocks.1.resnets.1.norm1.bias', 'decoder.up_blocks.1.resnets.1.norm1.weight', 'decoder.up_blocks.1.resnets.1.norm2.bias', 'decoder.up_blocks.1.resnets.1.norm2.weight', 'decoder.up_blocks.1.resnets.2.conv1.bias', 'decoder.up_blocks.1.resnets.2.conv1.weight', 'decoder.up_blocks.1.resnets.2.conv2.bias', 'decoder.up_blocks.1.resnets.2.conv2.weight', 'decoder.up_blocks.1.resnets.2.norm1.bias', 'decoder.up_blocks.1.resnets.2.norm1.weight', 'decoder.up_blocks.1.resnets.2.norm2.bias', 'decoder.up_blocks.1.resnets.2.norm2.weight', 'decoder.up_blocks.1.upsamplers.0.conv.bias', 'decoder.up_blocks.1.upsamplers.0.conv.weight', 'decoder.up_blocks.2.resnets.0.conv1.bias', 'decoder.up_blocks.2.resnets.0.conv1.weight', 'decoder.up_blocks.2.resnets.0.conv2.bias', 'decoder.up_blocks.2.resnets.0.conv2.weight', 'decoder.up_blocks.2.resnets.0.conv_shortcut.bias', 'decoder.up_blocks.2.resnets.0.conv_shortcut.weight', 'decoder.up_blocks.2.resnets.0.norm1.bias', 'decoder.up_blocks.2.resnets.0.norm1.weight', 'decoder.up_blocks.2.resnets.0.norm2.bias', 'decoder.up_blocks.2.resnets.0.norm2.weight', 'decoder.up_blocks.2.resnets.1.conv1.bias', 'decoder.up_blocks.2.resnets.1.conv1.weight', 'decoder.up_blocks.2.resnets.1.conv2.bias', 'decoder.up_blocks.2.resnets.1.conv2.weight', 'decoder.up_blocks.2.resnets.1.norm1.bias', 'decoder.up_blocks.2.resnets.1.norm1.weight', 'decoder.up_blocks.2.resnets.1.norm2.bias', 'decoder.up_blocks.2.resnets.1.norm2.weight', 'decoder.up_blocks.2.resnets.2.conv1.bias', 'decoder.up_blocks.2.resnets.2.conv1.weight', 'decoder.up_blocks.2.resnets.2.conv2.bias', 'decoder.up_blocks.2.resnets.2.conv2.weight', 'decoder.up_blocks.2.resnets.2.norm1.bias', 'decoder.up_blocks.2.resnets.2.norm1.weight', 'decoder.up_blocks.2.resnets.2.norm2.bias', 'decoder.up_blocks.2.resnets.2.norm2.weight', 'decoder.up_blocks.2.upsamplers.0.conv.bias', 'decoder.up_blocks.2.upsamplers.0.conv.weight', 
'decoder.up_blocks.3.resnets.0.conv1.bias', 'decoder.up_blocks.3.resnets.0.conv1.weight', 'decoder.up_blocks.3.resnets.0.conv2.bias', 'decoder.up_blocks.3.resnets.0.conv2.weight', 'decoder.up_blocks.3.resnets.0.conv_shortcut.bias', 'decoder.up_blocks.3.resnets.0.conv_shortcut.weight', 'decoder.up_blocks.3.resnets.0.norm1.bias', 'decoder.up_blocks.3.resnets.0.norm1.weight', 'decoder.up_blocks.3.resnets.0.norm2.bias', 'decoder.up_blocks.3.resnets.0.norm2.weight', 'decoder.up_blocks.3.resnets.1.conv1.bias', 'decoder.up_blocks.3.resnets.1.conv1.weight', 'decoder.up_blocks.3.resnets.1.conv2.bias', 'decoder.up_blocks.3.resnets.1.conv2.weight', 'decoder.up_blocks.3.resnets.1.norm1.bias', 'decoder.up_blocks.3.resnets.1.norm1.weight', 'decoder.up_blocks.3.resnets.1.norm2.bias', 'decoder.up_blocks.3.resnets.1.norm2.weight', 'decoder.up_blocks.3.resnets.2.conv1.bias', 'decoder.up_blocks.3.resnets.2.conv1.weight', 'decoder.up_blocks.3.resnets.2.conv2.bias', 'decoder.up_blocks.3.resnets.2.conv2.weight', 'decoder.up_blocks.3.resnets.2.norm1.bias', 'decoder.up_blocks.3.resnets.2.norm1.weight', 'decoder.up_blocks.3.resnets.2.norm2.bias', 'decoder.up_blocks.3.resnets.2.norm2.weight'])
import network module: .networks.lora_flux
                ERROR    !!! Exception during processing !!! Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device. execution.py:386
                ERROR    Traceback (most recent call last):                                        execution.py:387
  File "I:\ComfyUI\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "I:\ComfyUI\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "I:\ComfyUI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "I:\ComfyUI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "I:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\nodes.py", line 430, in init_training
    training_loop = network_trainer.init_train(args)
  File "I:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-FluxTrainer\train_network.py", line 380, in init_train
    vae.to(accelerator.device, dtype=vae_dtype)
  File "I:\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1174, in to
    return self._apply(convert)
  File "I:\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  File "I:\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  File "I:\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "I:\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 805, in _apply
    param_applied = fn(param)
  File "I:\ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1167, in convert
    raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.

                INFO     Prompt executed in 103.97 seconds                                               main.py:138
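For context on the exception itself: when the state dict fails to match (the `_IncompatibleKeys` dump above), the VAE's parameters are left on PyTorch's `meta` device, and the later `vae.to(accelerator.device, ...)` has no data to copy. A minimal reproduction of that failure mode, not the trainer's actual code path:

```python
import torch
import torch.nn as nn

# Parameters created on the "meta" device have shapes but no storage.
vae_like = nn.Linear(4, 4, device="meta")

try:
    # The same kind of call as vae.to(accelerator.device) in init_train.
    vae_like.to("cpu")
except NotImplementedError as e:
    print(e)  # raises the "Cannot copy out of meta tensor" error from the log

# to_empty() allocates fresh, uninitialized storage instead of copying.
# It silences the error, but the real fix here is loading a VAE whose
# keys actually match, so no parameters are left on meta at all.
materialized = vae_like.to_empty(device="cpu")
assert not materialized.weight.is_meta
```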
Lecho303 commented 1 month ago

Same error. Did you fix it?

kotaxyz commented 1 month ago

Same error. Did you fix it?

no

kijai commented 1 month ago

Your model seems to be the wrong file: I:\ComfyUI\ComfyUI\models\vae\FLUX.1-dev.safetensors. What is that file? For the VAE it is supposed to be only the original ae.safetensors.

kotaxyz commented 1 month ago

Your model seems to be the wrong file: I:\ComfyUI\ComfyUI\models\vae\FLUX.1-dev.safetensors. What is that file? For the VAE it is supposed to be only the original ae.safetensors.

It's ae.safetensors; I just renamed it to flux.1-dev. I use it in my workflows and it works fine.

kijai commented 1 month ago

Your model seems to be the wrong file: I:\ComfyUI\ComfyUI\models\vae\FLUX.1-dev.safetensors. What is that file? For the VAE it is supposed to be only the original ae.safetensors.

It's ae.safetensors; I just renamed it to flux.1-dev. I use it in my workflows and it works fine.

The error log suggests it's the diffusers VAE though, and kohya can't load that.
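The two layouts are easy to tell apart from their key names; both sets appear verbatim in the error log above. A minimal sketch of that check (the helper name `detect_vae_format` is just for illustration):

```python
def detect_vae_format(keys):
    """Guess the layout of a VAE state dict from its key names.

    The original BFL ae.safetensors uses keys like
    'encoder.down.0.block.0.norm1.weight', while the diffusers export
    uses 'encoder.down_blocks.0.resnets.0.norm1.weight'. kohya's
    trainer expects the former.
    """
    if any(k.startswith(("encoder.down_blocks.", "decoder.up_blocks.")) for k in keys):
        return "diffusers"
    if any(k.startswith(("encoder.down.", "decoder.up.")) for k in keys):
        return "original"
    return "unknown"
```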

kotaxyz commented 1 month ago

Oh, my bad. It seems I was using the wrong file: the VAE from the FLUX.1-dev branch instead of the ae file. I will test with it now. Thanks for your reply.

kotaxyz commented 1 month ago

Same error. Did you fix it?

Fixed. Just download ae.safetensors from here: https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main
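To double-check which file you actually downloaded without loading it into ComfyUI, the tensor names can be read straight from the safetensors header (the format is an 8-byte little-endian length prefix followed by a JSON blob); a small sketch:

```python
import json
import struct

def safetensors_keys(path):
    """List tensor names in a .safetensors file without loading weights."""
    with open(path, "rb") as f:
        # The file starts with an 8-byte little-endian header length,
        # followed by that many bytes of JSON metadata.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return sorted(k for k in header if k != "__metadata__")

# The original ae.safetensors should show keys like
# 'encoder.down.0.block.0.norm1.weight', not 'encoder.down_blocks....'.
```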

Lecho303 commented 1 month ago

Same error. Did you fix it?

Fixed. Just download ae.safetensors from here: https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main

Hi there, I tried downloading this ae file, but I still get the error. My training data is on the desktop, and the ae file is in E:\stable-diffusion-webui\models\VAE (ComfyUI is on D:\; I shared the SD paths to ComfyUI). Could that cause the error?

ComfyUI Error Report

## Error Details

## System Information
- **ComfyUI Version:** v0.2.0-2-g00a5d08
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.11.8 (tags/v3.11.8:db85d51, Feb  6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.3.1+cu121
## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : cudaMallocAsync
  - **Type:** cuda
  - **VRAM Total:** 6441926656
  - **VRAM Free:** 5358223360
  - **Torch VRAM Total:** 33554432
  - **Torch VRAM Free:** 33554432

## Logs

2024-09-05 18:59:21,770 - root - INFO - Total VRAM 6144 MB, total RAM 32461 MB
2024-09-05 18:59:21,771 - root - INFO - pytorch version: 2.3.1+cu121
2024-09-05 18:59:21,771 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-05 18:59:21,771 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : cudaMallocAsync
2024-09-05 18:59:22,875 - root - INFO - Using pytorch cross attention
2024-09-05 18:59:25,297 - root - INFO - [Prompt Server] web root: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\web
2024-09-05 18:59:25,309 - root - INFO - Adding extra search path checkpoints E:\stable-diffusion-webui\models/Stable-diffusion
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path configs E:\stable-diffusion-webui\models/Stable-diffusion
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path vae E:\stable-diffusion-webui\models/VAE
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path loras E:\stable-diffusion-webui\models/Lora
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path loras E:\stable-diffusion-webui\models/LyCORIS
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path upscale_models E:\stable-diffusion-webui\models/ESRGAN
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path upscale_models E:\stable-diffusion-webui\models/RealESRGAN
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path upscale_models E:\stable-diffusion-webui\models/SwinIR
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path embeddings E:\stable-diffusion-webui\embeddings
2024-09-05 18:59:25,310 - root - INFO - Adding extra search path hypernetworks E:\stable-diffusion-webui\models/hypernetworks
2024-09-05 18:59:25,311 - root - INFO - Adding extra search path controlnet E:\stable-diffusion-webui\models/ControlNet
2024-09-05 18:59:27,288 - root - INFO - Total VRAM 6144 MB, total RAM 32461 MB
2024-09-05 18:59:27,289 - root - INFO - pytorch version: 2.3.1+cu121
2024-09-05 18:59:27,289 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-05 18:59:27,289 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : cudaMallocAsync
2024-09-05 18:59:28,207 - root - WARNING - Traceback (most recent call last):
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\CosyVoice-ComfyUI\__init__.py", line 1, in <module>
    from .nodes import TextNode, CosyVoiceNode, LoadSRT, CosyVoiceDubbingNode
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\CosyVoice-ComfyUI\nodes.py", line 18, in <module>
    import audiosegment
ModuleNotFoundError: No module named 'audiosegment'

2024-09-05 18:59:28,207 - root - WARNING - Cannot import D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\CosyVoice-ComfyUI module for custom nodes: No module named 'audiosegment'
2024-09-05 18:59:28,223 - root - WARNING - Traceback (most recent call last):
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\nodes.py", line 1993, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\CosyVoice-ComfyUI-main\__init__.py", line 1, in <module>
    from .nodes import TextNode, CosyVoiceNode, LoadSRT, CosyVoiceDubbingNode
  File "D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\CosyVoice-ComfyUI-main\nodes.py", line 18, in <module>
    import audiosegment
ModuleNotFoundError: No module named 'audiosegment'

2024-09-05 18:59:28,223 - root - WARNING - Cannot import D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\CosyVoice-ComfyUI-main module for custom nodes: No module named 'audiosegment'
2024-09-05 18:59:28,262 - root - INFO - Import times for custom nodes:
2024-09-05 18:59:28,262 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\websocket_image_save.py
2024-09-05 18:59:28,262 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\AIGODLIKE-COMFYUI-TRANSLATION
2024-09-05 18:59:28,264 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\Lora-Training-in-Comfy
2024-09-05 18:59:28,264 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\Image-Captioning-in-ComfyUI
2024-09-05 18:59:28,264 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI-WD14-Tagger
2024-09-05 18:59:28,264 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI-GGUF
2024-09-05 18:59:28,264 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI-Jjk-Nodes
2024-09-05 18:59:28,265 - root - INFO -    0.0 seconds (IMPORT FAILED): D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\CosyVoice-ComfyUI-main
2024-09-05 18:59:28,265 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\StyleShot-ComfyUI
2024-09-05 18:59:28,265 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation
2024-09-05 18:59:28,265 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\rgthree-comfy
2024-09-05 18:59:28,265 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\comfyui-workspace-manager-main
2024-09-05 18:59:28,265 - root - INFO -    0.0 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI-KJNodes
2024-09-05 18:59:28,265 - root - INFO -    0.1 seconds (IMPORT FAILED): D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\CosyVoice-ComfyUI
2024-09-05 18:59:28,265 - root - INFO -    0.3 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI-Manager
2024-09-05 18:59:28,265 - root - INFO -    0.5 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2024-09-05 18:59:28,265 - root - INFO -    0.9 seconds: D:\comfyUI\ComfyUI_windows_portable_nvidia.7z\ComfyUI\custom_nodes\ComfyUI-FluxTrainer
2024-09-05 18:59:28,265 - root - INFO -
2024-09-05 18:59:28,271 - root - INFO - Starting server

2024-09-05 18:59:28,272 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-09-05 19:01:36,951 - root - INFO - got prompt
2024-09-05 19:01:36,959 - root - ERROR - Failed to validate prompt for output 130:
2024-09-05 19:01:36,959 - root - ERROR - AddLabel 78:
2024-09-05 19:01:36,960 - root - ERROR - - Required input is missing: image
2024-09-05 19:01:36,960 - root - ERROR - AddLabel 80:
2024-09-05 19:01:36,960 - root - ERROR - - Required input is missing: image
2024-09-05 19:01:36,960 - root - ERROR - Output will be ignored

## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

Workflow too large. Please manually upload the workflow from local file system.