epitaque / dreambooth_depth2img

adaptation of huggingface's dreambooth training script to support depth2img
MIT License

Producing Models with broken UNET, vae, CLIP #11

Open GreenTeaBD opened 1 year ago

GreenTeaBD commented 1 year ago

This mattered less before, but recent commits of the automatic1111 web UI refuse to work with my depth models (at least the ones trained with this dreambooth depth training script) because they have a broken UNet, CLIP, and VAE. See this issue for the webui.

Non-depth models trained with the ShivamShrirao repo don't seem to have this problem.
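A quick way to see why the toolkit flags these components is to inspect the checkpoint's key shapes directly. The sketch below is mine, not part of Model Toolkit: it guesses the text encoder variant from its hidden width (SD 1.x bundles a 768-dim CLIP ViT-L, SD 2.x a 1024-dim OpenCLIP ViT-H, which is why a v1 detector reports nearly every `cond_stage_model.*` key as missing) and the UNet variant from the input-channel count of its first convolution. With a real checkpoint the shapes would come from `torch.load(path, map_location="cpu")["state_dict"]`; a synthetic dict stands in here.

```python
# Rough diagnostic sketch (hypothetical helpers, not part of Model Toolkit).

def classify_text_encoder(shapes):
    """shapes: mapping of state-dict key -> shape tuple (tensor.shape)."""
    for key, shape in shapes.items():
        if key.startswith("cond_stage_model.") and key.endswith("layer_norm1.weight"):
            # Layer-norm width == transformer hidden size.
            return {768: "CLIP-v1 (SD 1.x)", 1024: "CLIP-v2 (SD 2.x)"}.get(
                shape[0], f"unknown width {shape[0]}"
            )
    return "no text encoder keys"

def classify_unet_input(shapes):
    """Infer UNet variant from the first conv's input-channel count:
    4 = plain txt2img latents, 5 = depth2img (4 latent + 1 depth),
    9 = inpainting (4 latent + 4 masked latent + 1 mask)."""
    shape = shapes.get("model.diffusion_model.input_blocks.0.0.weight")
    if shape is None:
        return "no UNet input conv"
    channels = shape[1]  # conv weight layout: (out_ch, in_ch, kH, kW)
    return {4: "txt2img (4ch)", 5: "depth2img (5ch)", 9: "inpainting (9ch)"}.get(
        channels, f"unknown ({channels}ch)"
    )

# Synthetic stand-in mirroring the shapes in the dump below.
fake = {
    "cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.weight": (1024,),
    "model.diffusion_model.input_blocks.0.0.weight": (320, 5, 3, 3),
}
print(classify_text_encoder(fake))  # CLIP-v2 (SD 2.x)
print(classify_unet_input(fake))    # depth2img (5ch)
```

On this reading, the "Rejected" entries below are consistent with a depth model: the v1/v2 txt2img detectors expect a 4-channel `input_blocks.0.0.weight` and a 768-dim text encoder, neither of which a 2.x depth checkpoint has.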

Running the models through the Model Toolkit extension in the webui outputs this:

Architecture

UNET-v2-Depth-BROKEN
    UNET-v2-Depth
        UNET-v2-Depth

Additional

VAE-v1-BROKEN
    VAE-v1
        VAE-v1-SD

Rejected

UNET-v1-SD: Missing required keys (65 of 686)
    model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight (1280, 768)
    model.diffusion_model.input_blocks.0.0.weight (320, 4, 3, 3)
    model.diffusion_model.output_blocks.3.1.proj_out.weight (1280, 1280, 1, 1)
    model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight (1280, 768)
    model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight (640, 768)
    ...
UNET-v1-Inpainting: Missing required keys (65 of 686)
    model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight (1280, 768)
    model.diffusion_model.output_blocks.3.1.proj_out.weight (1280, 1280, 1, 1)
    model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight (1280, 768)
    model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight (640, 768)
    model.diffusion_model.input_blocks.2.1.proj_out.weight (320, 320, 1, 1)
    ...
UNET-v1-Pix2Pix: Missing required keys (65 of 686)
    model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight (1280, 768)
    model.diffusion_model.output_blocks.3.1.proj_out.weight (1280, 1280, 1, 1)
    model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight (1280, 768)
    model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight (640, 768)
    model.diffusion_model.input_blocks.2.1.proj_out.weight (320, 320, 1, 1)
    ...
UNET-v2-SD: Missing required keys (1 of 686)
    model.diffusion_model.input_blocks.0.0.weight (320, 4, 3, 3)
CLIP-v1-SD: Missing required keys (196 of 197)
    cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.weight (768,)
    cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.bias (768,)
    cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm2.bias (768,)
    cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias (768,)
    cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight (768, 768)
    ...
SD-v1: Missing required classes
    CLIP-v1
    UNET-v1
SD-v1-Pix2Pix: Missing required classes
    CLIP-v1
    UNET-v1-Pix2Pix
SD-v2: Missing required classes
    CLIP-v2
    UNET-v2
SD-v2-Depth: Missing required classes
    CLIP-v2
    Depth-v2

Unknown

cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.10.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.12.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.12.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.12.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.12.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.12.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.12.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.12.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.12.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.13.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.13.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.13.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.13.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.13.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.13.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.13.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.13.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.14.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.14.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.14.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.14.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.14.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.14.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.14.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.14.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.15.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.15.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.15.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.15.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.15.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.15.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.15.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.15.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.16.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.16.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.16.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.16.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.16.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.16.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.16.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.16.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.17.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.17.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.17.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.17.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.17.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.17.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.17.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.17.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.18.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.18.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.18.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.18.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.18.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.18.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.18.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.18.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.19.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.19.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.19.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.19.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.19.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.19.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.19.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.19.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.20.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.20.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.20.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.20.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.20.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.20.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.20.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.20.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.21.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.21.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.21.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.21.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.21.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.21.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.21.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.21.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.22.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.22.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.22.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.22.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.22.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.22.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.22.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.22.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.4.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.5.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.6.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.7.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.8.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm1.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm1.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.layer_norm2.weight (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc1.bias (4096,)
cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc1.weight (4096, 1024)
cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc2.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.mlp.fc2.weight (1024, 4096)
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.k_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.k_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.out_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.out_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.q_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.bias (1024,)
cond_stage_model.transformer.text_model.encoder.layers.9.self_attn.v_proj.weight (1024, 1024)
cond_stage_model.transformer.text_model.final_layer_norm.bias (1024,)
cond_stage_model.transformer.text_model.final_layer_norm.weight (1024,)
...

Using the models gives a `modules.devices.NansException: A tensor with all NaNs was produced in Unet` error. Disabling the NaN check makes it "work", but it just outputs a black image.

I trained a lot of depth models that error out in this way. Generally the training script looked like this:

```shell
export MODEL_NAME="stabilityai/stable-diffusion-2-depth"
export INSTANCE_DIR="training/skscodysmall"
export CLASS_DIR="classes/man_unsplash"
export OUTPUT_DIR="model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --pretrained_txt2img_model_name_or_path="stabilityai/stable-diffusion-2-1-base" \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="skscody" \
  --class_prompt="man" \
  --seed=1337 \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=400 \
  --sample_batch_size=1 \
  --max_train_steps=3000
```

The accelerate config is:

```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: false
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
machine_rank: 0
main_training_function: main
megatron_lm_config: {}
mixed_precision: 'no'
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
use_cpu: false
```
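To narrow down which of the saved weights are actually corrupted (rather than relying on the webui's NaN check at inference time), one option is to scan the checkpoint's state dict for NaN tensors directly. This is a minimal sketch; the helper name `find_nan_tensors` is my own, and in practice you would pass it something like `torch.load("model.ckpt", map_location="cpu")["state_dict"]`:

```python
# Hypothetical diagnostic helper (not part of the training script):
# list every tensor in a state dict that contains at least one NaN,
# so you can tell whether the UNet, VAE, or CLIP weights were saved broken.
import torch

def find_nan_tensors(state_dict):
    """Return the keys of all tensors that contain at least one NaN."""
    return [k for k, v in state_dict.items()
            if torch.is_tensor(v) and torch.isnan(v).any()]

# Demo with a tiny synthetic state dict; replace with a real checkpoint's
# state_dict to inspect an actual model.
sd = {
    "model.diffusion_model.ok": torch.zeros(4),
    "first_stage_model.bad": torch.tensor([1.0, float("nan")]),
}
print(find_nan_tensors(sd))  # -> ['first_stage_model.bad']
```

The key prefixes in the output (`model.diffusion_model.*`, `first_stage_model.*`, `cond_stage_model.*`) map to the UNet, VAE, and CLIP respectively, which should make it clear where the corruption is.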

bartman081523 commented 1 year ago

Are you able to test the same thing with cudatoolkit 10.2, in a venv created via conda or micromamba with `conda create -p ./venv python==3.9 cudatoolkit==10.2 -c conda-forge -y && ./venv/bin/python launch.py`?