d8ahazard / sd_dreambooth_extension


[Bug]: Exception training model: 'local variable 'instance_loss' referenced before assignment'. #1357

Closed HOHOHOLO closed 10 months ago

HOHOHOLO commented 11 months ago

Is there an existing issue for this?

What happened?

Windows 11, Chrome browser. I've tried many times and always get the same result. At first I had a CUDA problem (it said CUDA did not exist); after re-cloning bitsandbytes that was resolved, but the local variable error still occurs.

That's how it looks:

[screenshot]

Steps to reproduce the problem

I create the model as shown in the screenshot, finish setting up the training settings, and generate class images. Then, when I click Train, this error appears.

Commit and libraries

Initializing Dreambooth
Dreambooth revision: 1a1d1621086a4725fda1200256f319c845dc7a8a
Successfully installed accelerate-0.23.0 fastapi-0.94.1 transformers-4.32.1

[+] xformers version 0.0.22 installed.
[+] torch version 2.0.1+cu118 installed.
[+] torchvision version 0.15.2+cu118 installed.
[+] accelerate version 0.23.0 installed.
[+] diffusers version 0.21.4 installed.
[+] transformers version 4.32.1 installed.
[+] bitsandbytes version 0.41.1 installed.
Launching Web UI with arguments: --xformers --autolaunch

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --xformers --autolaunch

call webui.bat

Console logs

venv "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
Installing requirements
If submitting an issue on github, please provide the full startup log for debugging purposes.

Initializing Dreambooth
Dreambooth revision: 1a1d1621086a4725fda1200256f319c845dc7a8a
Successfully installed accelerate-0.23.0 fastapi-0.94.1 transformers-4.32.1

[+] xformers version 0.0.22 installed.
[+] torch version 2.0.1+cu118 installed.
[+] torchvision version 0.15.2+cu118 installed.
[+] accelerate version 0.23.0 installed.
[+] diffusers version 0.21.4 installed.
[+] transformers version 4.32.1 installed.
[+] bitsandbytes version 0.41.1 installed.
Launching Web UI with arguments: --xformers --autolaunch
2023-09-29 23:01:12,823 - ControlNet - INFO - ControlNet v1.1.410
ControlNet preprocessor location: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-09-29 23:01:12,892 - ControlNet - INFO - ControlNet v1.1.410
Loading weights [6941a8ad9b] from C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\Stable-diffusion\perfectholo.ckpt
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\scripts\main.py:301: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 36.8s (prepare environment: 30.4s, import torch: 2.3s, import gradio: 0.6s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 1.4s, create ui: 0.6s, gradio launch: 0.2s).
Creating model from config: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\configs\v1-inference.yaml
Applying attention optimization: xformers... done.
Model loaded in 3.6s (load weights from disk: 0.9s, create model: 0.5s, apply weights to model: 0.7s, load textual inversion embeddings: 0.8s, calculate empty prompt: 0.7s).
Reusing loaded model perfectholo.ckpt [6941a8ad9b] to load dreamshaper_6BakedVae.safetensors [c249d7853b]
Loading weights [c249d7853b] from C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\Stable-diffusion\dreamshaper_6BakedVae.safetensors
Applying attention optimization: xformers... done.
Weights loaded in 1.6s (send model to cpu: 0.6s, load weights from disk: 0.1s, apply weights to model: 0.3s, move model to device: 0.6s).
Extracting config from C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\..\configs\v1-training-default.yaml
Extracting checkpoint from C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\Stable-diffusion\dreamshaper_6BakedVae.safetensors
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Duration: 00:00:11
Updating scheduler name to: DDIM
Please check your dataset directories.
Duration: 00:00:00
Pre-processing images: classifiers_0: : 48it [00:00, 144.48it/s]
We need a total of 24 class images.
Pre-processing images: classifiers_0: : 49it [00:00, 142.65it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:02<00:00,  2.67it/s]
Using scheduler: DEISMultistep: 100%|████████████████████████████████████████████████████| 7/7 [00:02<00:00,  2.52it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 40/40 [00:04<00:00,  9.77it/s]
Generating class images 24/24:: 100%|██████████████████████████████████████████████████| 24/24 [00:00<00:00, 90.20it/s]
Duration: 00:02:06
Initializing dreambooth training...
WARNING:dreambooth.optimization:Exception importing 8bit AdamW: No module named 'bitsandbytes.optim'
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\optimization.py", line 594, in get_optimizer
    from bitsandbytes.optim import AdamW8bit
ModuleNotFoundError: No module named 'bitsandbytes.optim'
No module named 'bitsandbytes.optim'
WARNING: Using default optimizer (AdamW from Torch)
Init dataset!
Init dataset:   0%|                                                                           | 0/5 [00:00<?, ?it/s]
Preparing Dataset (With Caching)
Bucket 0 (512, 512, 0) - Instance Images: 24 | Class Images: 24 | Max Examples/batch: 48
Saving cached latents...: 100%|███████████████████████████████████████████████████████| 48/48 [00:03<00:00, 15.78it/s]
Total Buckets 1 - Instance Images: 24 | Class Images: 24 | Max Examples/batch: 48

Total images / batch: 48, total examples: 48
Initializing bucket counter!
Steps:   0%|                                                                                  | 0/2880 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\ui_functions.py", line 730, in start_training
    result = main(class_gen_method=class_gen_method)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1809, in main
    return inner_loop()
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1684, in inner_loop
    logs["inst_loss"] = float(instance_loss.detach().item())
UnboundLocalError: local variable 'instance_loss' referenced before assignment
Steps:   0%|                                                                                  | 0/2880 [00:02<?, ?it/s]
Duration: 00:00:09
Duration: 00:00:10

0it [00:13, ?it/s]

Additional information

No response
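The `UnboundLocalError` in the log above comes from a common Python scoping pattern: a variable assigned only inside a conditional branch (here, the split-loss path) is read unconditionally later. The following is a minimal sketch of that failure pattern using hypothetical names, not the extension's actual code:

```python
# Sketch of the failure pattern behind the reported UnboundLocalError
# (hypothetical names; not the extension's actual code).

def training_step(batch, split_loss: bool):
    loss = 0.123  # placeholder for the real computed loss
    if split_loss:
        instance_loss = loss  # only bound on this branch
    logs = {}
    # UnboundLocalError if split_loss is False: the name was never assigned
    logs["inst_loss"] = float(instance_loss)
    return logs

def training_step_fixed(batch, split_loss: bool):
    loss = 0.123
    instance_loss = None  # bind the name before any branch
    if split_loss:
        instance_loss = loss
    logs = {}
    if instance_loss is not None:  # only log when it was actually computed
        logs["inst_loss"] = float(instance_loss)
    return logs
```

This also explains why disabling 'Calculate Split Loss' sidesteps the crash only if the logging path is skipped; initializing the variable up front is the defensive fix.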

HenryYoon commented 11 months ago

I have just solved it! Just unselect the 'Calculate Split Loss' option on the Testing tab. It will help. Thanks!

HOHOHOLO commented 11 months ago

After Training is done I see this: Exception training model: ''AttnProcessor2_0' object has no attribute 'state_dict''.
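The `'AttnProcessor2_0' object has no attribute 'state_dict'` failure fits a known pattern: diffusers' default `AttnProcessor2_0` is a stateless callable (plain attention, no LoRA weights), so code that calls `state_dict()` on every attention processor breaks on it. A hedged sketch, using stub classes with hypothetical names rather than the extension's code, of a guard that skips stateless processors:

```python
# Stubs standing in for diffusers processor classes (hypothetical names).
class LoRAProcessorStub:
    """Mimics a LoRA attention processor, which does carry weights."""
    def state_dict(self):
        return {"to_q_lora.weight": [0.0]}

class AttnProcessorStub:
    """Mimics AttnProcessor2_0: a plain callable with no state_dict()."""
    pass

def collect_attn_state(processors):
    """Gather processor weights, skipping processors that hold none."""
    state = {}
    for name, proc in processors.items():
        if not hasattr(proc, "state_dict"):  # stateless processor: nothing to save
            continue
        for key, value in proc.state_dict().items():
            state[f"{name}.{key}"] = value
    return state
```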

HOHOHOLO commented 11 months ago

Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights
    s_image = s_pipeline(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in __call__
    prompt_embeds, negative_prompt_embeds = self.encode_prompt(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Model name: Marcy
Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
  all_df_loss = all_df_loss.fillna(method="ffill")
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_1400.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_1400.png
Cleanup log parse.

[The same "WARNING:dreambooth.train_dreambooth:Exception saving sample." traceback and log-parse block repeat identically at steps 1600, 1800, 2000, and 2200.]

Steps: 100%|███████████████████████████████████████| 2400/2400 [18:22<00:00,  2.57it/s, loss=0.167, lr=2e-6, vram=11.6]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.61it/s]
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\ui_functions.py", line 730, in start_training
    result = main(class_gen_method=class_gen_method)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1809, in main
    return inner_loop()
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1766, in inner_loop
    check_save(True)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 976, in check_save
    save_weights(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1100, in save_weights
    unet_lora_layers_to_save = unet_attn_processors_state_dict(unet)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\utils\model_utils.py", line 242, in unet_attn_processors_state_dict
    for parameter_key, parameter in attn_processor.state_dict().items():
AttributeError: 'AttnProcessor2_0' object has no attribute 'state_dict'
Steps: 100%|███████████████████████████████████████| 2400/2400 [18:23<00:00,  2.17it/s, loss=0.167, lr=2e-6, vram=11.6]
Duration: 00:18:46
Saving Lora Weights...:   0%|                                                                    | 0/1 [00:00<?, ?it/s]
Duration: 00:18:47

HOHOHOLO commented 11 months ago

After training finishes I see this: Exception training model: ''AttnProcessor2_0' object has no attribute 'state_dict''. And these are the console logs:

Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights
    s_image = s_pipeline(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in __call__
    prompt_embeds, negative_prompt_embeds = self.encode_prompt(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Model name: Marcy
Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
  all_df_loss = all_df_loss.fillna(method="ffill")
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_1400.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_1400.png
Cleanup log parse.
Steps: 67%|█████████████████████████▎ | 1600/2400 [13:03<04:59, 2.67it/s, loss=0.0113, lr=2e-6, vram=12.4]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.82it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.50it/s]
Saving diffusion model: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.06s/it]
WARNING:dreambooth.train_dreambooth:Exception saving sample. | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights
    s_image = s_pipeline(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in __call__
    prompt_embeds, negative_prompt_embeds = self.encode_prompt(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Model name: Marcy
Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
  all_df_loss = all_df_loss.fillna(method="ffill")
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_1600.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_1600.png
Cleanup log parse.
Steps: 75%|███████████████████████████▊ | 1800/2400 [14:25<03:44, 2.67it/s, loss=0.00502, lr=2e-6, vram=12.4]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 12.48it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.51it/s]
Saving diffusion model: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.09s/it]
WARNING:dreambooth.train_dreambooth:Exception saving sample. | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights
    s_image = s_pipeline(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in __call__
    prompt_embeds, negative_prompt_embeds = self.encode_prompt(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Model name: Marcy
Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
  all_df_loss = all_df_loss.fillna(method="ffill")
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_1800.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_1800.png
Cleanup log parse.
Steps: 83%|███████████████████████████████▋ | 2000/2400 [15:43<02:20, 2.85it/s, loss=0.0187, lr=2e-6, vram=11.6]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.04it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.57it/s]
Saving diffusion model: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.02s/it]
WARNING:dreambooth.train_dreambooth:Exception saving sample. | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights
    s_image = s_pipeline(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in __call__
    prompt_embeds, negative_prompt_embeds = self.encode_prompt(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Model name: Marcy
Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
  all_df_loss = all_df_loss.fillna(method="ffill")
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_2000.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_2000.png
Cleanup log parse.
Steps: 92%|██████████████████████████████████▊ | 2200/2400 [16:59<01:09, 2.86it/s, loss=0.0251, lr=2e-6, vram=11.6]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.93it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.33it/s]
Saving diffusion model: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.09s/it]
WARNING:dreambooth.train_dreambooth:Exception saving sample. | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights
    s_image = s_pipeline(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in __call__
    prompt_embeds, negative_prompt_embeds = self.encode_prompt(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Model name: Marcy
Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
  all_df_loss = all_df_loss.fillna(method="ffill")
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_2200.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_2200.png
Cleanup log parse.
Steps: 100%|███████████████████████████████████████| 2400/2400 [18:22<00:00, 2.57it/s, loss=0.167, lr=2e-6, vram=11.6]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.61it/s]
Traceback (most recent call last): | 0/1 [00:00<?, ?it/s]
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\ui_functions.py", line 730, in start_training
    result = main(class_gen_method=class_gen_method)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1809, in main
    return inner_loop()
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1766, in inner_loop
    check_save(True)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 976, in check_save
    save_weights(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1100, in save_weights
    unet_lora_layers_to_save = unet_attn_processors_state_dict(unet)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\utils\model_utils.py", line 242, in unet_attn_processors_state_dict
    for parameter_key, parameter in attn_processor.state_dict().items():
AttributeError: 'AttnProcessor2_0' object has no attribute 'state_dict'
Steps: 100%|███████████████████████████████████████| 2400/2400 [18:23<00:00, 2.17it/s, loss=0.167, lr=2e-6, vram=11.6]
Duration: 00:18:46
Saving Lora Weights...: 0%| | 0/1 [00:00<?, ?it/s]
Duration: 00:18:47
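The final AttributeError comes from iterating the UNet's attention processors and calling `.state_dict()` on each. In recent diffusers versions, stock processors such as `AttnProcessor2_0` are plain Python classes rather than `nn.Module` subclasses, so they have no `state_dict()`. A minimal sketch of the kind of guard that avoids the crash when collecting LoRA weights (the helper name and structure are illustrative, not the extension's actual code):

```python
def collect_attn_processor_state(processors):
    """Gather weights only from processors that actually expose state_dict().

    Plain processors (e.g. diffusers' AttnProcessor2_0) carry no learnable
    parameters, so they are skipped instead of raising AttributeError.
    """
    state = {}
    for name, proc in processors.items():
        state_dict = getattr(proc, "state_dict", None)
        if callable(state_dict):  # LoRA processors are nn.Modules and qualify
            for key, value in state_dict().items():
                state[f"{name}.{key}"] = value
    return state
```

With a check like this, saving continues past the stock processors and only serializes the trained LoRA layers.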

fredrick00 commented 10 months ago

I have just solved it! Just unselect the 'Calculate Split Loss' option on the Testing tab. It will help. Thanks!
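The repeated RuntimeError in the logs above means the sample prompt's `input_ids` were created on `cuda:0` while the text encoder's embedding weights were still on the CPU. A small, hypothetical diagnostic (duck-typed: it works with anything whose `.parameters()` yields objects carrying a `.device` attribute, as torch modules do) can confirm such a mismatch before sampling:

```python
def find_parameter_devices(modules):
    """Return the set of devices used by parameters of the given modules.

    More than one entry reproduces the failure mode in the logs: a lookup
    tensor on cuda:0 indexing an embedding weight left on cpu.
    """
    devices = set()
    for module in modules:
        for param in module.parameters():
            devices.add(str(param.device))
    return devices
```

If the set contains both `cpu` and `cuda:0`, moving the whole pipeline to one device (e.g. `pipeline.to("cuda")` in diffusers) before generating samples is the usual remedy.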

github-actions[bot] commented 10 months ago

This issue is stale because it has been open 5 days with no activity. Remove stale label or comment or this will be closed in 5 days