n00mkrad / text2image-gui

Somewhat modular text2image GUI, initially just for Stable Diffusion
GNU General Public License v3.0
948 stars 97 forks

DreamBooth does not work on an RTX 4090 #65

Open · ffdown opened this issue 1 year ago

ffdown commented 1 year ago

Ryzen 5 3600, RTX 4090, 48 GB RAM, 6 TB HDD, 2 TB SSD...

All modes are broken.

Maximum mode: fails with "Not enough SWAP".

Medium mode:

Unhandled Thread Exception!
Not enough memory.
Stack Trace:
   at System.Drawing.TextureBrush..ctor(Image image, WrapMode wrapMode)
   at System.Windows.Forms.ControlPaint.DrawBackgroundImage(Graphics g, Image backgroundImage, Color backColor, ImageLayout backgroundImageLayout, Rectangle bounds, Rectangle clipRect, Point scrollOffset, RightToLeft rightToLeft)
   at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle, Color backColor, Point scrollOffset)
   at System.Windows.Forms.Control.PaintBackground(PaintEventArgs e, Rectangle rectangle)
   at System.Windows.Forms.Control.OnPaintBackground(PaintEventArgs pevent)
   at System.Windows.Forms.Control.PaintWithErrorHandling(PaintEventArgs e, Int16 layer)
   at System.Windows.Forms.Control.WmPaint(Message& m)
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

Low mode: CUDNN ERROR.
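
A quick way to narrow the low-mode cuDNN error down is to check whether the bundled PyTorch actually sees the card and a working cuDNN build; Ada-generation GPUs like the 4090 generally need a fairly recent PyTorch/CUDA build. A minimal diagnostic sketch, run with the interpreter from the SDGUI venv (nothing here is specific to this repo):

```python
# Minimal diagnostic: does the bundled PyTorch see the RTX 4090 and cuDNN?
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))          # should name the RTX 4090
    print("cuDNN enabled:", torch.backends.cudnn.enabled)
    print("cuDNN version:", torch.backends.cudnn.version())
```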

ffdown commented 1 year ago
[00000154] [01-24-2023 23:13:59] Global seed set to 23
[00000157] [01-24-2023 23:14:37] Running on GPUs 0,
[00000158] [01-24-2023 23:14:37] Loading model from E:/AI/SDPORT/models/Stable-diffusion/deliberate_v11.ckpt
[00000155] [01-24-2023 23:14:37] Traceback (most recent call last):
[00000156] [01-24-2023 23:14:37]   File "E:\SDGUI-1.9.0\Data\repo\db\main.py", line 644, in <module>
[00000159] [01-24-2023 23:14:37]     model = load_model_from_config(config, opt.actual_resume)
[00000160] [01-24-2023 23:14:37]   File "E:\SDGUI-1.9.0\Data\repo\db\main.py", line 28, in load_model_from_config
[00000161] [01-24-2023 23:14:37]     sd = pl_sd["state_dict"]
[00000162] [01-24-2023 23:14:37] KeyError: 'state_dict'
[00000165] [01-24-2023 23:14:47] During handling of the above exception, another exception occurred:
[00000166] [01-24-2023 23:14:47] Traceback (most recent call last):
[00000167] [01-24-2023 23:14:47]   File "E:\SDGUI-1.9.0\Data\repo\db\main.py", line 859, in <module>
[00000168] [01-24-2023 23:14:47]     if trainer.global_rank == 0:
[00000169] [01-24-2023 23:14:47] NameError: name 'trainer' is not defined. Did you mean: 'Trainer'?
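
Two separate problems are visible in this first run. The KeyError means main.py line 28 assumes every .ckpt stores its weights under a top-level "state_dict" key, which deliberate_v11.ckpt evidently lacks; the follow-up NameError is just the cleanup handler at line 859 touching `trainer` before it was ever assigned. A more defensive loader might look like this (a hypothetical sketch, not the repo's actual code):

```python
# Hypothetical defensive variant of the loader that raised KeyError above:
# fall back to treating the file as a bare state dict when the usual
# Lightning-style "state_dict" wrapper key is missing.
import torch

def load_state_dict(ckpt_path: str) -> dict:
    pl_sd = torch.load(ckpt_path, map_location="cpu")
    if isinstance(pl_sd, dict) and "state_dict" in pl_sd:
        return pl_sd["state_dict"]  # normal wrapped checkpoint
    return pl_sd                    # bare state dict (common in merged/pruned .ckpt files)
```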
[00000174] [01-24-2023 23:15:23] Global seed set to 23
[00000175] [01-24-2023 23:16:05] Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.16.mlp.fc2.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.13.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_model.encoder.layers.11.layer_norm1.bias', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.mlp.fc1.weight', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_model.encoder.layers.10.self_attn.q_proj.weight', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.12.mlp.fc2.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'text_projection.weight', 'vision_model.encoder.layers.14.mlp.fc2.bias', 'vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.bias', 'vision_model.encoder.layers.14.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.0.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.mlp.fc1.bias', 'vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.mlp.fc1.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.encoder.layers.0.mlp.fc2.bias', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.weight', 'vision_model.encoder.layers.17.layer_norm2.weight', 
'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_model.pre_layrnorm.weight', 'vision_model.encoder.layers.17.self_attn.out_proj.bias', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.17.self_attn.q_proj.bias', 'vision_model.encoder.layers.15.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.layer_norm2.weight', 'vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.14.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.mlp.fc2.weight', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.weight', 'vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.2.mlp.fc1.weight', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.self_attn.out_proj.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.weight', 'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.mlp.fc2.weight', 'vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.encoder.layers.18.mlp.fc2.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.weight', 'vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_model.encoder.layers.21.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.3.layer_norm1.weight', 
'vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.18.self_attn.k_proj.weight', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.20.self_attn.v_proj.weight', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.19.mlp.fc2.weight', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.14.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.mlp.fc1.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.mlp.fc1.weight', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_model.encoder.layers.18.self_attn.v_proj.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.bias', 'vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.layer_norm2.weight', 'visual_projection.weight', 'vision_model.encoder.layers.18.mlp.fc1.bias', 'vision_model.encoder.layers.5.mlp.fc2.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_model.encoder.layers.16.layer_norm1.weight', 'vision_model.encoder.layers.17.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.bias', 'vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.13.mlp.fc1.bias', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 
'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.12.mlp.fc2.bias', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_model.encoder.layers.5.mlp.fc1.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.layer_norm1.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.19.self_attn.out_proj.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.mlp.fc1.bias', 'vision_model.encoder.layers.14.self_attn.out_proj.bias', 'vision_model.encoder.layers.6.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.bias', 'vision_model.encoder.layers.19.layer_norm2.weight', 'vision_model.embeddings.position_ids', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.21.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.self_attn.v_proj.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.13.self_attn.v_proj.weight', 'logit_scale', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'vision_model.encoder.layers.20.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.12.layer_norm2.weight', 
'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.bias', 'vision_model.encoder.layers.12.mlp.fc1.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.weight', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.19.mlp.fc2.bias', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_model.encoder.layers.15.mlp.fc2.bias', 'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.encoder.layers.23.mlp.fc2.weight', 'vision_model.encoder.layers.7.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_model.encoder.layers.17.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.20.self_attn.out_proj.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.bias', 'vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.layer_norm2.weight', 
'vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_model.encoder.layers.17.mlp.fc2.weight', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.bias', 'vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.encoder.layers.21.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.mlp.fc1.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.bias', 'vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.14.mlp.fc2.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.weight', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_model.encoder.layers.20.self_attn.k_proj.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.mlp.fc1.weight', 'vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.encoder.layers.14.mlp.fc1.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.weight', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.16.self_attn.k_proj.bias', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.6.layer_norm2.weight', 'vision_model.encoder.layers.16.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_model.encoder.layers.19.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.bias']
[00000176] [01-24-2023 23:16:05] - This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
[00000177] [01-24-2023 23:16:05] - This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
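
The long list of unused weights is expected here, as the two explanation lines above say: the Hub checkpoint contains the full CLIP model (text and vision towers), while the conditioning stage only needs the text encoder, so every `vision_model.*` and `visual_projection` tensor is dropped. The warning can be reproduced in isolation with the real transformers API:

```python
# Reproduces the warning above in isolation: loading only the text tower
# from the full CLIP checkpoint discards all vision_model.* weights by design.
from transformers import CLIPTextModel

text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
print(type(text_encoder).__name__)  # CLIPTextModel: no vision tower attached
```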
[00000178] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loggers\test_tube.py:105: LightningDeprecationWarning: The TestTubeLogger is deprecated since v1.5 and will be removed in v1.7. We recommend switching to the `pytorch_lightning.loggers.TensorBoardLogger` as an alternative.
[00000179] [01-24-2023 23:16:10]   rank_zero_deprecation(
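
The TestTubeLogger deprecation is harmless at this Lightning version; the replacement the warning suggests is a one-line swap (sketch; the `save_dir` and `name` values are illustrative):

```python
# The swap suggested by the deprecation warning (illustrative save_dir/name).
from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger(save_dir="logs", name="dreambooth")
# trainer = Trainer(logger=logger, ...)
```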
[00000180] [01-24-2023 23:16:10] GPU available: True, used: True
[00000181] [01-24-2023 23:16:10] TPU available: False, using: 0 TPU cores
[00000182] [01-24-2023 23:16:10] IPU available: False, using: 0 IPUs
[00000183] [01-24-2023 23:16:10] HPU available: False, using: 0 HPUs
[00000184] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\repo\db\ldm\data\personalized.py:175: DeprecationWarning: LINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
[00000185] [01-24-2023 23:16:10]   self.interpolation = {"linear": PIL.Image.LINEAR,
[00000186] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\repo\db\ldm\data\personalized.py:176: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
[00000187] [01-24-2023 23:16:10]   "bilinear": PIL.Image.BILINEAR,
[00000188] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\repo\db\ldm\data\personalized.py:177: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
[00000189] [01-24-2023 23:16:10]   "bicubic": PIL.Image.BICUBIC,
[00000190] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\repo\db\ldm\data\personalized.py:178: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
[00000191] [01-24-2023 23:16:10]   "lanczos": PIL.Image.LANCZOS,
[... log lines 00000192-00000231: the same four Pillow DeprecationWarning messages repeat five more times ...]
[00000232] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:326: LightningDeprecationWarning: Base `LightningModule.on_train_batch_start` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
[00000233] [01-24-2023 23:16:10]   rank_zero_deprecation(
[00000234] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:335: LightningDeprecationWarning: The `on_keyboard_interrupt` callback hook was deprecated in v1.5 and will be removed in v1.7. Please use the `on_exception` callback hook instead.
[00000235] [01-24-2023 23:16:10]   rank_zero_deprecation(
[00000236] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:391: LightningDeprecationWarning: The `Callback.on_pretrain_routine_start` hook has been deprecated in v1.6 and will be removed in v1.8. Please use `Callback.on_fit_start` instead.
[00000237] [01-24-2023 23:16:10]   rank_zero_deprecation(
[00000238] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:342: LightningDeprecationWarning: Base `Callback.on_train_batch_end` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
[00000239] [01-24-2023 23:16:10]   rank_zero_deprecation(
[00000240] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\repo\db\ldm\data\personalized.py:175: DeprecationWarning: LINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
[00000241] [01-24-2023 23:16:10]   self.interpolation = {"linear": PIL.Image.LINEAR,
[00000242] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\repo\db\ldm\data\personalized.py:176: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
[00000243] [01-24-2023 23:16:10]   "bilinear": PIL.Image.BILINEAR,
[00000244] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\repo\db\ldm\data\personalized.py:177: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
[00000245] [01-24-2023 23:16:10]   "bicubic": PIL.Image.BICUBIC,
[00000246] [01-24-2023 23:16:10] E:\SDGUI-1.9.0\Data\repo\db\ldm\data\personalized.py:178: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
[00000247] [01-24-2023 23:16:10]   "lanczos": PIL.Image.LANCZOS,
[... log lines 00000248-00000287: the same four Pillow DeprecationWarning messages repeat five more times ...]
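
All of these Pillow warnings point at the same four-entry interpolation table in ldm/data/personalized.py lines 175-178, re-emitted once per dataset instance. The fix the warnings themselves suggest is mechanical (sketch; requires Pillow 9.1+, where the Resampling enum was introduced):

```python
# Drop-in replacement for the interpolation table in
# ldm/data/personalized.py lines 175-178, using the Resampling enum
# the DeprecationWarnings ask for (available since Pillow 9.1).
import PIL.Image

INTERPOLATION = {
    "linear":   PIL.Image.Resampling.BILINEAR,  # LINEAR was an alias of BILINEAR
    "bilinear": PIL.Image.Resampling.BILINEAR,
    "bicubic":  PIL.Image.Resampling.BICUBIC,
    "lanczos":  PIL.Image.Resampling.LANCZOS,
}
```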
[00000288] [01-24-2023 23:16:10] LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
[00000289] [01-24-2023 23:16:11]   | Name              | Type               | Params
[00000290] [01-24-2023 23:16:11] ---------------------------------------------------------
[00000291] [01-24-2023 23:16:11] 0 | model             | DiffusionWrapper   | 859 M 
[00000292] [01-24-2023 23:16:11] 1 | first_stage_model | AutoencoderKL      | 83.7 M
[00000293] [01-24-2023 23:16:11] 2 | cond_stage_model  | FrozenCLIPEmbedder | 123 M 
[00000294] [01-24-2023 23:16:11] ---------------------------------------------------------
[00000295] [01-24-2023 23:16:11] 982 M     Trainable params
[00000296] [01-24-2023 23:16:11] 83.7 M    Non-trainable params
[00000297] [01-24-2023 23:16:11] 1.1 B     Total params
[00000298] [01-24-2023 23:16:11] 4,264.941 Total estimated model params size (MB)
[00000299] [01-24-2023 23:16:11] Running on GPUs 0,
[00000300] [01-24-2023 23:16:11] Loading model from E:/AI/SDPORT/models/Stable-diffusion/f222_v1.ckpt
[00000301] [01-24-2023 23:16:11] LatentDiffusion: Running in eps-prediction mode
[00000302] [01-24-2023 23:16:11] DiffusionWrapper has 859.52 M params.
[00000303] [01-24-2023 23:16:11] making attention of type 'vanilla' with 512 in_channels
[00000304] [01-24-2023 23:16:11] Working with z of shape (1, 4, 64, 64) = 16384 dimensions.
[00000305] [01-24-2023 23:16:11] making attention of type 'vanilla' with 512 in_channels
[00000306] [01-24-2023 23:16:11] Restored from E:/AI/SDPORT/models/Stable-diffusion/f222_v1.ckpt with 12 missing and 2 unexpected keys
[00000307] [01-24-2023 23:16:11] Missing Keys: ['betas', 'alphas_cumprod', 'alphas_cumprod_prev', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'log_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod', 'posterior_variance', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2']
[00000308] [01-24-2023 23:16:11] Unexpected Keys: ['model_ema.decay', 'model_ema.num_updates']
[00000309] [01-24-2023 23:16:11] Monitoring val/loss_simple_ema as checkpoint metric.
[00000310] [01-24-2023 23:16:11] Merged modelckpt-cfg: 
[00000311] [01-24-2023 23:16:11] {'target': 'pytorch_lightning.callbacks.ModelCheckpoint', 'params': {'dirpath': 'E:/SDGUI-1.9.0/Data/sessions/2023-01-24-23-09-00/db/1674602120761\\checkpoints', 'filename': '{epoch:06}', 'verbose': True, 'save_last': True, 'monitor': 'val/loss_simple_ema', 'save_top_k': 0, 'every_n_train_steps': 1001}}
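
Spelled out, the merged modelckpt-cfg above is just a serialized ModelCheckpoint callback; written directly with the same values it would be (sketch):

```python
# The merged modelckpt-cfg from the log, written out directly (same values).
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(
    dirpath="E:/SDGUI-1.9.0/Data/sessions/2023-01-24-23-09-00/db/1674602120761/checkpoints",
    filename="{epoch:06}",
    verbose=True,
    save_last=True,                 # always write last.ckpt
    monitor="val/loss_simple_ema",
    save_top_k=0,                   # keep no "best" checkpoints
    every_n_train_steps=1001,       # one step past max_steps=1000, so it never fires mid-run
)
```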
[00000312] [01-24-2023 23:16:11] #### Data #####
[00000313] [01-24-2023 23:16:11] train, PersonalizedBase, 1900
[00000314] [01-24-2023 23:16:11] reg, PersonalizedBase, 1900
[00000315] [01-24-2023 23:16:11] validation, PersonalizedBase, 19
[00000316] [01-24-2023 23:16:11] accumulate_grad_batches = 1
[00000317] [01-24-2023 23:16:11] ++++ NOT USING LR SCALING ++++
[00000318] [01-24-2023 23:16:11] Setting learning rate to 1.37e-06
[00000319] [01-24-2023 23:16:11] LatentDiffusion: Also optimizing conditioner params!
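
The "++++ NOT USING LR SCALING ++++" line means the rate is taken straight from base_learning_rate (1.368e-06, printed rounded as 1.37e-06). When scaling is on, LDM-style training scripts conventionally multiply by the effective batch; a sketch of that convention (not necessarily this repo's exact code, and with this run's values the two branches coincide anyway):

```python
# Conventional LR scaling in LDM-style training scripts (sketch; may not be
# this repo's exact code). With this run's values both branches coincide.
base_lr = 1.36800003051758e-06   # base_learning_rate from the config above
accumulate_grad_batches = 1      # from the log
ngpu = 1
batch_size = 1

scale_lr = False                 # "++++ NOT USING LR SCALING ++++"
if scale_lr:
    learning_rate = accumulate_grad_batches * ngpu * batch_size * base_lr
else:
    learning_rate = base_lr

print(f"Setting learning rate to {learning_rate:.2e}")  # -> 1.37e-06, as in the log
```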
[00000320] [01-24-2023 23:16:11] Project config
[00000321] [01-24-2023 23:16:11] model:
[00000322] [01-24-2023 23:16:11]   base_learning_rate: 1.36800003051758e-06
[00000323] [01-24-2023 23:16:11]   target: ldm.models.diffusion.ddpm.LatentDiffusion
[00000324] [01-24-2023 23:16:11]   params:
[00000325] [01-24-2023 23:16:11]     reg_weight: 1.0
[00000326] [01-24-2023 23:16:11]     linear_start: 0.00085
[00000327] [01-24-2023 23:16:11]     linear_end: 0.012
[00000328] [01-24-2023 23:16:11]     num_timesteps_cond: 1
[00000329] [01-24-2023 23:16:11]     log_every_t: 200
[00000330] [01-24-2023 23:16:11]     timesteps: 1000
[00000331] [01-24-2023 23:16:11]     first_stage_key: image
[00000332] [01-24-2023 23:16:11]     cond_stage_key: caption
[00000333] [01-24-2023 23:16:11]     image_size: 64
[00000334] [01-24-2023 23:16:11]     channels: 4
[00000335] [01-24-2023 23:16:11]     cond_stage_trainable: true
[00000336] [01-24-2023 23:16:11]     conditioning_key: crossattn
[00000337] [01-24-2023 23:16:11]     monitor: val/loss_simple_ema
[00000338] [01-24-2023 23:16:11]     scale_factor: 0.18215
[00000339] [01-24-2023 23:16:11]     use_ema: false
[00000340] [01-24-2023 23:16:11]     embedding_reg_weight: 0.0
[00000341] [01-24-2023 23:16:11]     unfreeze_model: true
[00000342] [01-24-2023 23:16:11]     model_lr: 1.0e-06
[00000343] [01-24-2023 23:16:11]     personalization_config:
[00000344] [01-24-2023 23:16:11]       target: ldm.modules.embedding_manager.EmbeddingManager
[00000345] [01-24-2023 23:16:11]       params:
[00000346] [01-24-2023 23:16:11]         placeholder_strings:
[00000347] [01-24-2023 23:16:11]         - '*'
[00000348] [01-24-2023 23:16:11]         initializer_words:
[00000349] [01-24-2023 23:16:11]         - sculpture
[00000350] [01-24-2023 23:16:11]         per_image_tokens: false
[00000351] [01-24-2023 23:16:11]         num_vectors_per_token: 1
[00000352] [01-24-2023 23:16:11]         progressive_words: false
[00000353] [01-24-2023 23:16:11]     unet_config:
[00000354] [01-24-2023 23:16:11]       target: ldm.modules.diffusionmodules.openaimodel.UNetModel
[00000355] [01-24-2023 23:16:11]       params:
[00000356] [01-24-2023 23:16:11]         image_size: 32
[00000357] [01-24-2023 23:16:11]         in_channels: 4
[00000358] [01-24-2023 23:16:11]         out_channels: 4
[00000359] [01-24-2023 23:16:11]         model_channels: 320
[00000360] [01-24-2023 23:16:11]         attention_resolutions:
[00000361] [01-24-2023 23:16:11]         - 4
[00000362] [01-24-2023 23:16:11]         - 2
[00000363] [01-24-2023 23:16:11]         - 1
[00000364] [01-24-2023 23:16:11]         num_res_blocks: 2
[00000365] [01-24-2023 23:16:11]         channel_mult:
[00000366] [01-24-2023 23:16:11]         - 1
[00000367] [01-24-2023 23:16:11]         - 2
[00000368] [01-24-2023 23:16:11]         - 4
[00000369] [01-24-2023 23:16:11]         - 4
[00000370] [01-24-2023 23:16:11]         num_heads: 8
[00000371] [01-24-2023 23:16:11]         use_spatial_transformer: true
[00000372] [01-24-2023 23:16:11]         transformer_depth: 1
[00000373] [01-24-2023 23:16:11]         context_dim: 768
[00000374] [01-24-2023 23:16:11]         use_checkpoint: true
[00000375] [01-24-2023 23:16:11]         legacy: false
[00000376] [01-24-2023 23:16:11]     first_stage_config:
[00000377] [01-24-2023 23:16:11]       target: ldm.models.autoencoder.AutoencoderKL
[00000378] [01-24-2023 23:16:11]       params:
[00000379] [01-24-2023 23:16:11]         embed_dim: 4
[00000380] [01-24-2023 23:16:11]         monitor: val/rec_loss
[00000381] [01-24-2023 23:16:11]         ddconfig:
[00000382] [01-24-2023 23:16:11]           double_z: true
[00000383] [01-24-2023 23:16:11]           z_channels: 4
[00000384] [01-24-2023 23:16:11]           resolution: 512
[00000385] [01-24-2023 23:16:11]           in_channels: 3
[00000386] [01-24-2023 23:16:11]           out_ch: 3
[00000387] [01-24-2023 23:16:11]           ch: 128
[00000388] [01-24-2023 23:16:11]           ch_mult:
[00000389] [01-24-2023 23:16:11]           - 1
[00000390] [01-24-2023 23:16:11]           - 2
[00000391] [01-24-2023 23:16:11]           - 4
[00000392] [01-24-2023 23:16:11]           - 4
[00000393] [01-24-2023 23:16:11]           num_res_blocks: 2
[00000394] [01-24-2023 23:16:11]           attn_resolutions: []
[00000395] [01-24-2023 23:16:11]           dropout: 0.0
[00000396] [01-24-2023 23:16:11]         lossconfig:
[00000397] [01-24-2023 23:16:11]           target: torch.nn.Identity
[00000398] [01-24-2023 23:16:11]     cond_stage_config:
[00000399] [01-24-2023 23:16:11]       target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
[00000400] [01-24-2023 23:16:11]     ckpt_path: E:/AI/SDPORT/models/Stable-diffusion/f222_v1.ckpt
[00000401] [01-24-2023 23:16:11] data:
[00000402] [01-24-2023 23:16:11]   target: main.DataModuleFromConfig
[00000403] [01-24-2023 23:16:11]   params:
[00000404] [01-24-2023 23:16:11]     batch_size: 1
[00000405] [01-24-2023 23:16:11]     num_workers: 1
[00000406] [01-24-2023 23:16:11]     wrap: false
[00000407] [01-24-2023 23:16:11]     train:
[00000408] [01-24-2023 23:16:11]       target: ldm.data.personalized.PersonalizedBase
[00000409] [01-24-2023 23:16:11]       params:
[00000410] [01-24-2023 23:16:11]         size: 512
[00000411] [01-24-2023 23:16:11]         set: train
[00000412] [01-24-2023 23:16:11]         per_image_tokens: false
[00000413] [01-24-2023 23:16:11]         repeats: 100
[00000414] [01-24-2023 23:16:11]         placeholder_token: vikargi
[00000415] [01-24-2023 23:16:11]     reg:
[00000416] [01-24-2023 23:16:11]       target: ldm.data.personalized.PersonalizedBase
[00000417] [01-24-2023 23:16:11]       params:
[00000418] [01-24-2023 23:16:11]         size: 512
[00000419] [01-24-2023 23:16:11]         set: train
[00000420] [01-24-2023 23:16:11]         reg: true
[00000421] [01-24-2023 23:16:11]         per_image_tokens: false
[00000422] [01-24-2023 23:16:11]         repeats: 100
[00000423] [01-24-2023 23:16:11]         placeholder_token: vikargi
[00000424] [01-24-2023 23:16:11]     validation:
[00000425] [01-24-2023 23:16:11]       target: ldm.data.personalized.PersonalizedBase
[00000426] [01-24-2023 23:16:11]       params:
[00000427] [01-24-2023 23:16:11]         size: 512
[00000428] [01-24-2023 23:16:11]         set: val
[00000429] [01-24-2023 23:16:11]         per_image_tokens: false
[00000430] [01-24-2023 23:16:11]         repeats: 10
[00000431] [01-24-2023 23:16:11]         placeholder_token: vikargi
[00000432] [01-24-2023 23:16:11] Lightning config
[00000433] [01-24-2023 23:16:11] modelcheckpoint:
[00000434] [01-24-2023 23:16:11]   params:
[00000435] [01-24-2023 23:16:11]     every_n_train_steps: 1001
[00000436] [01-24-2023 23:16:11] callbacks:
[00000437] [01-24-2023 23:16:11]   image_logger:
[00000438] [01-24-2023 23:16:11]     target: main.ImageLogger
[00000439] [01-24-2023 23:16:11]     params:
[00000440] [01-24-2023 23:16:11]       batch_frequency: 250
[00000441] [01-24-2023 23:16:11]       max_images: 8
[00000442] [01-24-2023 23:16:11]       increase_log_steps: false
[00000443] [01-24-2023 23:16:11] trainer:
[00000444] [01-24-2023 23:16:11]   benchmark: true
[00000445] [01-24-2023 23:16:11]   max_steps: 1000
[00000446] [01-24-2023 23:16:11]   gpus: 0,
[00000447] [01-24-2023 23:16:12] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\connectors\data_connector.py:240: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 12 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
[00000448] [01-24-2023 23:16:12]   rank_zero_warn(
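
The dataloader warning matches the data config above (num_workers: 1 against 12 CPUs). In plain DataLoader terms the suggestion amounts to the following sketch (TensorDataset stands in for PersonalizedBase; on Windows, worker processes also require the usual `if __name__ == "__main__":` guard around the entry point):

```python
# What the PossibleUserWarning asks for, in plain DataLoader terms.
# TensorDataset is a stand-in for the PersonalizedBase dataset here.
import os
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(8, 3))
loader = DataLoader(
    dataset,
    batch_size=1,
    num_workers=os.cpu_count() or 1,  # the warning suggests 12 on this machine
    pin_memory=True,                  # common companion setting for GPU training
)
```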
[00000449] [01-24-2023 23:16:16] Sanity Checking: 0it [00:00, ?it/s]
[00000450] [01-24-2023 23:16:16] Sanity Checking:   0%|          | 0/2 [00:00<?, ?it/s]
[00000451] [01-24-2023 23:16:21] Sanity Checking DataLoader 0:   0%|          | 0/2 [00:00<?, ?it/s]
[00000452] [01-24-2023 23:16:21] Sanity Checking DataLoader 0:  50%|#####     | 1/2 [00:04<00:04,  4.96s/it]
[00000453] [01-24-2023 23:16:21] Sanity Checking DataLoader 0:  50%|#####     | 1/2 [00:04<00:04,  4.96s/it]
[00000454] [01-24-2023 23:16:21] Sanity Checking DataLoader 0: 100%|##########| 2/2 [00:05<00:00,  2.16s/it]
[00000455] [01-24-2023 23:16:21] Sanity Checking DataLoader 0: 100%|##########| 2/2 [00:05<00:00,  2.16s/it]
[00000456] [01-24-2023 23:16:21] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\connectors\data_connector.py:240: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 12 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
[00000457] [01-24-2023 23:16:21]   rank_zero_warn(
[00000458] [01-24-2023 23:16:21] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py:2102: LightningDeprecationWarning: `Trainer.root_gpu` is deprecated in v1.6 and will be removed in v1.8. Please use `Trainer.strategy.root_device.index` instead.
[00000459] [01-24-2023 23:16:21]   rank_zero_deprecation(
[00000460] [01-24-2023 23:16:21] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py:2102: LightningDeprecationWarning: `Trainer.root_gpu` is deprecated in v1.6 and will be removed in v1.8. Please use `Trainer.strategy.root_device.index` instead.
[00000461] [01-24-2023 23:16:21]   rank_zero_deprecation(
[00000462] [01-24-2023 23:16:21] Training: 0it [00:00, ?it/s]
[00000464] [01-24-2023 23:16:21] Training:   0%|          | 0/1919 [00:00<?, ?it/s]
[00000466] [01-24-2023 23:16:24] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\data.py:72: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 1. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
[00000467] [01-24-2023 23:16:24]   warning_cache.warn(
[00000468] [01-24-2023 23:16:24] E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:229: UserWarning: You called `self.log('global_step', ...)` in your `training_step` but the value needs to be floating point. Converting it to torch.float32.
[00000469] [01-24-2023 23:16:24]   warning_cache.warn(
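
That "ambiguous collection" warning is silenced by passing batch_size explicitly wherever self.log is called inside training_step; a method fragment as a sketch (compute_loss is a hypothetical stand-in for the model's loss code):

```python
# Fragment of a LightningModule: pass batch_size explicitly so Lightning
# does not have to infer it (compute_loss is a hypothetical stand-in).
def training_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)
    self.log("train/loss", loss, batch_size=1)  # matches batch_size: 1 in the config
    return loss
```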
[00000470] [01-24-2023 23:16:28] Epoch 0:   0%|          | 0/1919 [00:00<?, ?it/s] 
[00000471] [01-24-2023 23:16:28] Epoch 0:   0%|          | 1/1919 [00:07<3:46:54,  7.10s/it]
[00000472] [01-24-2023 23:16:28] Epoch 0:   0%|          | 1/1919 [00:07<3:46:54,  7.10s/it]
[00000473] [01-24-2023 23:16:29] Epoch 0:   0%|          | 1/1919 [00:07<3:47:33,  7.12s/it, loss=0.0338, v_num=0, train/loss_simple_step=0.00448, train/loss_vlb_step=2.57e-5, train/loss_step=0.00448, global_step=0.000]Summoning checkpoint.
[00000474] [01-24-2023 23:16:29] Traceback (most recent call last):
[00000475] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 723, in _call_and_handle_interrupt
[00000476] [01-24-2023 23:16:29]     return trainer_fn(*args, **kwargs)
[00000477] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 811, in _fit_impl
[00000478] [01-24-2023 23:16:29]     results = self._run(model, ckpt_path=self.ckpt_path)
[00000479] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1236, in _run
[00000481] [01-24-2023 23:16:29]     results = self._run_stage()
[00000482] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1323, in _run_stage
[00000483] [01-24-2023 23:16:29]     return self._run_train()
[00000484] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1353, in _run_train
[00000485] [01-24-2023 23:16:29]     self.fit_loop.run()
[00000486] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\base.py", line 204, in run
[00000487] [01-24-2023 23:16:29]     self.advance(*args, **kwargs)
[00000488] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 266, in advance
[00000489] [01-24-2023 23:16:29]     self._outputs = self.epoch_loop.run(self._data_fetcher)
[00000490] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\base.py", line 204, in run
[00000491] [01-24-2023 23:16:29]     self.advance(*args, **kwargs)
[00000492] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 208, in advance
[00000493] [01-24-2023 23:16:29]     batch_output = self.batch_loop.run(batch, batch_idx)
[00000494] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\base.py", line 204, in run
[00000495] [01-24-2023 23:16:29]     self.advance(*args, **kwargs)
[00000496] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\batch\training_batch_loop.py", line 88, in advance
[00000497] [01-24-2023 23:16:29]     outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
[00000498] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\base.py", line 204, in run
[00000499] [01-24-2023 23:16:29]     self.advance(*args, **kwargs)
[00000500] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 203, in advance
[00000501] [01-24-2023 23:16:29]     result = self._run_optimization(
[00000502] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 256, in _run_optimization
[00000503] [01-24-2023 23:16:29]     self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
[00000504] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 369, in _optimizer_step
[00000505] [01-24-2023 23:16:29]     self.trainer._call_lightning_module_hook(
[00000506] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1595, in _call_lightning_module_hook
[00000507] [01-24-2023 23:16:29]     output = fn(*args, **kwargs)
[00000508] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\core\lightning.py", line 1646, in optimizer_step
[00000509] [01-24-2023 23:16:29]     optimizer.step(closure=optimizer_closure)
[00000510] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\core\optimizer.py", line 168, in step
[00000511] [01-24-2023 23:16:29]     step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
[00000512] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 193, in optimizer_step
[00000513] [01-24-2023 23:16:29]     return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
[00000514] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\plugins\precision\precision_plugin.py", line 155, in optimizer_step
[00000515] [01-24-2023 23:16:29]     return optimizer.step(closure=closure, **kwargs)
[00000516] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\optim\optimizer.py", line 88, in wrapper
[00000517] [01-24-2023 23:16:29]     return func(*args, **kwargs)
[00000518] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
[00000519] [01-24-2023 23:16:29]     return func(*args, **kwargs)
[00000520] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\optim\adamw.py", line 100, in step
[00000521] [01-24-2023 23:16:29]     loss = closure()
[00000522] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\plugins\precision\precision_plugin.py", line 140, in _wrap_closure
[00000523] [01-24-2023 23:16:29]     closure_result = closure()
[00000524] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 148, in __call__
[00000525] [01-24-2023 23:16:29]     self._result = self.closure(*args, **kwargs)
[00000526] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 143, in closure
[00000527] [01-24-2023 23:16:29]     self._backward_fn(step_output.closure_loss)
[00000528] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 311, in backward_fn
[00000529] [01-24-2023 23:16:29]     self.trainer._call_strategy_hook("backward", loss, optimizer, opt_idx)
[00000530] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1765, in _call_strategy_hook
[00000531] [01-24-2023 23:16:29]     output = fn(*args, **kwargs)
[00000532] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 168, in backward
[00000533] [01-24-2023 23:16:29]     self.precision_plugin.backward(self.lightning_module, closure_loss, *args, **kwargs)
[00000534] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\plugins\precision\precision_plugin.py", line 80, in backward
[00000535] [01-24-2023 23:16:29]     model.backward(closure_loss, optimizer, *args, **kwargs)
[00000536] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\core\lightning.py", line 1391, in backward
[00000537] [01-24-2023 23:16:29]     loss.backward(*args, **kwargs)
[00000538] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\_tensor.py", line 363, in backward
[00000539] [01-24-2023 23:16:29]     torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
[00000540] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\autograd\__init__.py", line 173, in backward
[00000541] [01-24-2023 23:16:29]     Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[00000542] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\autograd\function.py", line 253, in apply
[00000543] [01-24-2023 23:16:29]     return user_fn(self, *args)
[00000544] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\repo\db\ldm\modules\diffusionmodules\util.py", line 139, in backward
[00000545] [01-24-2023 23:16:29]     input_grads = torch.autograd.grad(
[00000546] [01-24-2023 23:16:29]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\autograd\__init__.py", line 275, in grad
[00000547] [01-24-2023 23:16:29]     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[00000548] [01-24-2023 23:16:29] RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 23.99 GiB total capacity; 19.24 GiB already allocated; 1.64 GiB free; 19.79 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[00000551] [01-24-2023 23:20:26] During handling of the above exception, another exception occurred:
[00000552] [01-24-2023 23:20:26] Traceback (most recent call last):
[00000553] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\repo\db\main.py", line 837, in <module>
[00000554] [01-24-2023 23:20:26]     trainer.fit(model, data)
[00000555] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 770, in fit
[00000556] [01-24-2023 23:20:26]     self._call_and_handle_interrupt(
[00000557] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 738, in _call_and_handle_interrupt
[00000558] [01-24-2023 23:20:26]     self._teardown()
[00000559] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1300, in _teardown
[00000560] [01-24-2023 23:20:26]     self.strategy.teardown()
[00000561] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\strategies\single_device.py", line 93, in teardown
[00000562] [01-24-2023 23:20:26]     super().teardown()
[00000563] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 444, in teardown
[00000564] [01-24-2023 23:20:26]     optimizers_to_device(self.optimizers, torch.device("cpu"))
[00000565] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\optimizer.py", line 27, in optimizers_to_device
[00000566] [01-24-2023 23:20:26]     optimizer_to_device(opt, device)
[00000567] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\optimizer.py", line 33, in optimizer_to_device
[00000568] [01-24-2023 23:20:26]     optimizer.state[p] = apply_to_collection(v, torch.Tensor, move_data_to_device, device)
[00000569] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 107, in apply_to_collection
[00000570] [01-24-2023 23:20:26]     v = apply_to_collection(
[00000571] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 99, in apply_to_collection
[00000572] [01-24-2023 23:20:26]     return function(data, *args, **kwargs)
[00000573] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 354, in move_data_to_device
[00000574] [01-24-2023 23:20:26]     return apply_to_collection(batch, dtype=dtype, function=batch_to)
[00000575] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 99, in apply_to_collection
[00000576] [01-24-2023 23:20:26]     return function(data, *args, **kwargs)
[00000577] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 347, in batch_to
[00000578] [01-24-2023 23:20:26]     data_output = data.to(device, **kwargs)
[00000579] [01-24-2023 23:20:26] RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 3686400 bytes.
[00000580] [01-24-2023 23:20:26] During handling of the above exception, another exception occurred:
[00000581] [01-24-2023 23:20:26] Traceback (most recent call last):
[00000582] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\repo\db\main.py", line 839, in <module>
[00000583] [01-24-2023 23:20:26]     melk()
[00000584] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\repo\db\main.py", line 819, in melk
[00000585] [01-24-2023 23:20:26]     trainer.save_checkpoint(ckpt_path)
[00000586] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 2467, in save_checkpoint
[00000587] [01-24-2023 23:20:26]     self._checkpoint_connector.save_checkpoint(filepath, weights_only=weights_only, storage_options=storage_options)
[00000588] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\trainer\connectors\checkpoint_connector.py", line 445, in save_checkpoint
[00000589] [01-24-2023 23:20:26]     self.trainer.strategy.save_checkpoint(_checkpoint, filepath, storage_options=storage_options)
[00000590] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 418, in save_checkpoint
[00000591] [01-24-2023 23:20:26]     self.checkpoint_io.save_checkpoint(checkpoint, filepath, storage_options=storage_options)
[00000592] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\plugins\io\torch_plugin.py", line 54, in save_checkpoint
[00000593] [01-24-2023 23:20:26]     atomic_save(checkpoint, path)
[00000594] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\pytorch_lightning\utilities\cloud_io.py", line 67, in atomic_save
[00000595] [01-24-2023 23:20:26]     torch.save(checkpoint, bytesbuffer)
[00000596] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\serialization.py", line 380, in save
[00000597] [01-24-2023 23:20:26]     _save(obj, opened_zipfile, pickle_module, pickle_protocol)
[00000598] [01-24-2023 23:20:26]   File "E:\SDGUI-1.9.0\Data\venv\lib\site-packages\torch\serialization.py", line 589, in _save
[00000599] [01-24-2023 23:20:26]     pickler.dump(obj)
[00000600] [01-24-2023 23:20:26] MemoryError
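Reading the chain above: training first hits a CUDA out-of-memory during the backward pass (19.24 GiB of the card's 24 GiB already allocated), and the emergency save then dies on the CPU side as well. For the GPU half, the PyTorch error text itself recommends setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to curb allocator fragmentation. A minimal sketch of applying that — the value 512 is illustrative, not a tuned recommendation, and it assumes the GUI's backend process inherits environment variables (or that the lines go at the top of `main.py`, before torch is imported):

```python
import os

# Must be set before the first CUDA allocation, i.e. before importing torch.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch

# Optional sanity check: report free vs. total memory on GPU 0.
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info(0)
    print(f"GPU 0: {free / 2**30:.2f} GiB free of {total / 2**30:.2f} GiB")
```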
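The second and third tracebacks are CPU-side failures: `melk()` tries to save an emergency checkpoint, and `atomic_save` pickles the entire checkpoint (optimizer state included) into an in-memory buffer before it ever touches disk, so the save alone needs several spare GiB of RAM/pagefile on top of whatever training already consumed. The `save_checkpoint` signature visible in the trace accepts `weights_only`; a hypothetical one-line tweak to `melk()` that drops optimizer state and so roughly halves what gets pickled:

```python
# Hypothetical change to melk() (main.py line 819 in the trace above):
# keep model weights only, so the in-memory pickle buffer stays small.
trainer.save_checkpoint(ckpt_path, weights_only=True)
```

If even that fails, the remaining lever is enlarging the Windows pagefile: `DefaultCPUAllocator: not enough memory` and the final `MemoryError` mean system memory ran out, not VRAM.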