Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
What happened?
Unable to train an SD v1.x model: training aborts after only two steps with "Loss is NaN, your model is dead. Cancelling training." I don't know what went wrong and am asking for help.
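For context, the console log below shows the extension cancelling training as soon as the loss becomes NaN. A minimal sketch (hypothetical, not the extension's actual code) of the kind of guard that produces that message; common triggers for a NaN loss are a too-high learning rate, fp16 overflow, or corrupt input images:

```python
import math

def loss_is_finite(loss_value: float) -> bool:
    """Return False once the training loss has gone NaN/Inf."""
    return math.isfinite(loss_value)

# Inside a training loop this would be used roughly like:
#   if not loss_is_finite(loss.item()):
#       print("Loss is NaN, your model is dead. Cancelling training.")
#       break

assert loss_is_finite(0.123)
assert not loss_is_finite(float("nan"))
```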
Steps to reproduce the problem
Go to ....
Press ....
...
Commit and libraries
Command Line Arguments
y
Console logs
To create a public link, set `share=True` in `launch()`.
Startup time: 44.9s (prepare environment: 17.8s, import torch: 3.5s, import gradio: 1.2s, setup paths: 1.2s, initialize shared: 0.3s, other imports: 0.7s, setup codeformer: 0.1s, load scripts: 2.0s, create ui: 1.2s, gradio launch: 16.8s).
Advanced elements visible: False
Initializing dreambooth training...
Init dataset!: 0%| | 0/5 [00:00<?, ?it/s]
Preparing Dataset (With Caching)
Bucket 0 (1280, 1280, 0) - Instance Images: 322 | Class Images: 0 | Max Examples/batch: 322
Saving cached latents...: 100%|█████████████████████████████████████████████████████| 322/322 [00:58<00:00, 5.58it/s]
Total Buckets 1 - Instance Images: 322 | Class Images: 0 | Max Examples/batch: 322
Total images / batch: 322, total examples: 322
Initializing bucket counter!
Steps: 0%| | 2/48300 [00:12<80:38:55, 6.01s/it, loss=nan, lr=8e-6, vram=15.2]WARNING:dreambooth.train_dreambooth:Loss is NaN, your model is dead. Cancelling training.
D:\gj\WEIBU\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:02<00:00, 3.12it/s]
Renaming encoder.mid.attn_1.to_q.weight to encoder.mid.attn_1.q.weight
Renaming encoder.mid.attn_1.to_q.bias to encoder.mid.attn_1.q.bias
Renaming encoder.mid.attn_1.to_k.weight to encoder.mid.attn_1.k.weight
Renaming encoder.mid.attn_1.to_k.bias to encoder.mid.attn_1.k.bias
Renaming encoder.mid.attn_1.to_v.weight to encoder.mid.attn_1.v.weight
Renaming encoder.mid.attn_1.to_v.bias to encoder.mid.attn_1.v.bias
Renaming encoder.mid.attn_1.to_out.0.weight to encoder.mid.attn_1.proj_out.weight
Renaming encoder.mid.attn_1.to_out.0.bias to encoder.mid.attn_1.proj_out.bias
Renaming decoder.mid.attn_1.to_q.weight to decoder.mid.attn_1.q.weight
Renaming decoder.mid.attn_1.to_q.bias to decoder.mid.attn_1.q.bias
Renaming decoder.mid.attn_1.to_k.weight to decoder.mid.attn_1.k.weight
Renaming decoder.mid.attn_1.to_k.bias to decoder.mid.attn_1.k.bias
Renaming decoder.mid.attn_1.to_v.weight to decoder.mid.attn_1.v.weight
Renaming decoder.mid.attn_1.to_v.bias to decoder.mid.attn_1.v.bias
Renaming decoder.mid.attn_1.to_out.0.weight to decoder.mid.attn_1.proj_out.weight
Renaming decoder.mid.attn_1.to_out.0.bias to decoder.mid.attn_1.proj_out.bias
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:03<00:00, 1.78it/s]
Compiling Checkpoint: 3it [00:05, 1.90s/it] Model name: A2 | 33%|█████████████████████▎ | 1/3 [00:13<00:27, 13.56s/it]
D:\gj\WEIBU\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
all_df_loss = all_df_loss.fillna(method="ffill")
Saving D:\gj\WEIBU\stable-diffusion-webui\models\dreambooth\A2\logging\loss_plot_2.png
Saving D:\gj\WEIBU\stable-diffusion-webui\models\dreambooth\A2\logging\ram_plot_2.png
Cleanup log parse.
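Note on the FutureWarning from `log_parser.py` in the log above: pandas has deprecated `DataFrame.fillna(method="ffill")`, and `DataFrame.ffill()` is the drop-in replacement the warning suggests. A small sketch (the `df` here is illustrative, not the extension's actual loss DataFrame):

```python
import pandas as pd

df = pd.DataFrame({"loss": [0.5, None, 0.4, None]})

# Deprecated form, as in log_parser.py line 324:
#   all_df_loss = all_df_loss.fillna(method="ffill")
# Replacement suggested by the warning:
filled = df.ffill()

assert filled["loss"].tolist() == [0.5, 0.5, 0.4, 0.4]
```

This warning is harmless for now, but the deprecated call will raise once pandas removes the `method` argument.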
Additional information
No response