Closed — HOHOHOLO closed this issue 10 months ago.
I have just solved it! Just unselect the 'Calculate Split Loss' option on the Testing tab. That fixed it for me. Thanks!
After training is done I see this: `Exception training model: 'AttnProcessor2_0' object has no attribute 'state_dict'`. And these are the console logs:
```
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights
    s_image = s_pipeline(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in __call__
    prompt_embeds, negative_prompt_embeds = self.encode_prompt(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Model name: Marcy
Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
  all_df_loss = all_df_loss.fillna(method="ffill")
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_1400.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_1400.png
Cleanup log parse.
Steps:  67%|█████████▎    | 1600/2400 [13:03<04:59, 2.67it/s, loss=0.0113, lr=2e-6, vram=12.4]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 11.82it/s]
Loading pipeline components...: 100%|██████████| 7/7 [00:01<00:00, 4.50it/s]
Saving diffusion model: 100%|██████████| 1/1 [00:02<00:00, 2.06s/it]
WARNING:dreambooth.train_dreambooth:Exception saving sample. | 0/3 [00:00<?, ?it/s]

[... the identical "Expected all tensors to be on the same device" traceback and save cycle repeats at every checkpoint — 1600/2400, 1800/2400, 2000/2400, 2200/2400 — saving loss_plot/ram_plot 1600 through 2200 ...]

Steps: 100%|██████████| 2400/2400 [18:22<00:00, 2.57it/s, loss=0.167, lr=2e-6, vram=11.6]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 11.61it/s]
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\ui_functions.py", line 730, in start_training
    result = main(class_gen_method=class_gen_method)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1809, in main
    return inner_loop()
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1766, in inner_loop
    check_save(True)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 976, in check_save
    save_weights(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1100, in save_weights
    unet_lora_layers_to_save = unet_attn_processors_state_dict(unet)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\utils\model_utils.py", line 242, in unet_attn_processors_state_dict
    for parameter_key, parameter in attn_processor.state_dict().items():
AttributeError: 'AttnProcessor2_0' object has no attribute 'state_dict'
Steps: 100%|██████████| 2400/2400 [18:23<00:00, 2.17it/s, loss=0.167, lr=2e-6, vram=11.6]
Duration: 00:18:46
Saving Lora Weights...:   0%| | 0/1 [00:00<?, ?it/s]
Duration: 00:18:47
```
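For context on the recurring `RuntimeError` during sample saving: it means one pipeline module (here, the text encoder) was left on the CPU while the input tensors were on `cuda:0`. A minimal sketch of a pre-flight check that would surface the mismatch up front instead of deep inside `F.embedding` — the component names mirror a diffusers `StableDiffusionPipeline`, but the classes below are stand-ins so the sketch runs without torch or a GPU, and this is not the extension's actual code:

```python
# Hypothetical pre-flight check: verify every pipeline module reports the
# same device before sampling. FakeModule stands in for an nn.Module.

class FakeModule:
    """Stand-in for an nn.Module that knows which device it lives on."""
    def __init__(self, device: str):
        self.device = device

def check_same_device(components: dict) -> str:
    """Return the shared device, or raise listing the mismatched modules."""
    devices = {name: mod.device for name, mod in components.items()}
    if len(set(devices.values())) > 1:
        raise RuntimeError(f"pipeline components on mixed devices: {devices}")
    return next(iter(devices.values()))

# A pipeline whose text encoder was left on the CPU, as in the log above:
pipe = {
    "unet": FakeModule("cuda:0"),
    "vae": FakeModule("cuda:0"),
    "text_encoder": FakeModule("cpu"),  # <- the culprit
}
try:
    check_same_device(pipe)
except RuntimeError as e:
    print(e)  # names text_encoder as cpu while the rest are on cuda:0

# After moving everything to one device the check passes:
pipe["text_encoder"] = FakeModule("cuda:0")
assert check_same_device(pipe) == "cuda:0"
```

With the real pipeline the equivalent fix is usually a single `s_pipeline.to(device)` before sampling, which moves all registered modules together.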
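The final `AttributeError` is a separate bug: `unet_attn_processors_state_dict` iterates the UNet's attention processors and calls `.state_dict()` on each one, but `AttnProcessor2_0` (diffusers' PyTorch-2.0 scaled-dot-product attention path) holds no parameters and has no `state_dict` method. A hedged sketch of a defensive version of that loop — the processor classes here are minimal stand-ins, not diffusers' real ones, so the example runs without diffusers installed:

```python
# Sketch of a defensive rewrite of the loop that crashed: skip attention
# processors that expose no state_dict (parameter-free processors such as
# AttnProcessor2_0). Stub classes below are stand-ins for illustration.

class LoRAAttnProcessorStub:
    """Stand-in for a processor that owns trainable weights."""
    def state_dict(self):
        return {"to_q_lora.down.weight": [0.1, 0.2]}

class AttnProcessor2_0Stub:
    """Stand-in for the parameter-free SDPA processor: no state_dict."""
    pass

def attn_processors_state_dict(processors: dict) -> dict:
    """Collect weights only from processors that actually have any."""
    out = {}
    for name, proc in processors.items():
        if not hasattr(proc, "state_dict"):  # parameter-free, nothing to save
            continue
        for key, value in proc.state_dict().items():
            out[f"{name}.{key}"] = value
    return out

procs = {
    "down_blocks.0.attn1.processor": LoRAAttnProcessorStub(),
    "mid_block.attn1.processor": AttnProcessor2_0Stub(),  # would have crashed
}
sd = attn_processors_state_dict(procs)
assert list(sd) == ["down_blocks.0.attn1.processor.to_q_lora.down.weight"]
```

This matches the workaround in the thread: disabling the code path that walks every processor avoids calling `state_dict()` on the parameter-free ones.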
I have just solved it! Just unselect 'Calculate Split Loss' button on the testing tab. It will help. Thanks! After Training is done I see this: Exception training model: ''AttnProcessor2_0' object has no attribute 'state_dict''. And these are the console logs: Traceback (most recent call last): File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights s_image = s_pipeline( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(*args, kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in call prompt_embeds, negative_prompt_embeds = self.encode_prompt( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt prompt_embeds = self.text_encoder( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, *kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward return self.text_model( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(args, kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, kwargs) File 
"C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward inputs_embeds = self.token_embedding(input_ids) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, *kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward return F.embedding( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select) Model name: Marcy Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0 C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead. all_df_loss = all_df_loss.fillna(method="ffill") Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_1400.png Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_1400.png Cleanup log parse. Steps: 67%|█████████████████████████▎ | 1600/2400 [13:03<04:59, 2.67it/s, loss=0.0113, lr=2e-6, vram=12.4]C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. 
Please use CLIPImageProcessor instead. warnings.warn( Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.82it/s] Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.50it/s] Saving diffusion model: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.06s/it]WARNING:dreambooth.train_dreambooth:Exception saving sample. | 0/3 [00:00<?, ?it/s] Traceback (most recent call last): File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights s_image = s_pipeline( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(args, kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in call prompt_embeds, negative_prompt_embeds = self.encode_prompt( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt prompt_embeds = self.text_encoder( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward return self.text_model( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, *kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward hidden_states = 
self.embeddings(input_ids=input_ids, position_ids=position_ids) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(args, kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward inputs_embeds = self.token_embedding(input_ids) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, kwargs) File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward return F.embedding( File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDAindex_select) Model name: Marcy Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0 C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead. all_df_loss = all_df_loss.fillna(method="ffill") Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_1600.png Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_1600.png Cleanup log parse. 
Steps: 75%|███████████████████████████▊ | 1800/2400 [14:25<03:44, 2.67it/s, loss=0.00502, lr=2e-6, vram=12.4]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 12.48it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.51it/s]
Saving diffusion model: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.09s/it]
WARNING:dreambooth.train_dreambooth:Exception saving sample. | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1281, in save_weights
    s_image = s_pipeline(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 635, in __call__
    prompt_embeds, negative_prompt_embeds = self.encode_prompt(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 338, in encode_prompt
    prompt_embeds = self.text_encoder(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 735, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 230, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\sparse.py", line 162, in forward
    return F.embedding(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Model name: Marcy
Log file updated, re-parsing: C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\dreambooth\events.out.tfevents.1696523103.DESKTOP-KK6I990.15716.0
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\helpers\log_parser.py:324: FutureWarning: DataFrame.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
  all_df_loss = all_df_loss.fillna(method="ffill")
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_1800.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_1800.png
Cleanup log parse.
Steps: 83%|███████████████████████████████▋ | 2000/2400 [15:43<02:20, 2.85it/s, loss=0.0187, lr=2e-6, vram=11.6]
[identical "Exception saving sample" traceback and log-parser output repeated at step 2000]
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_2000.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_2000.png
Cleanup log parse.
Steps: 92%|██████████████████████████████████▊ | 2200/2400 [16:59<01:09, 2.86it/s, loss=0.0251, lr=2e-6, vram=11.6]
[identical "Exception saving sample" traceback and log-parser output repeated at step 2200]
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\loss_plot_2200.png
Saving C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\models\dreambooth\Marcy\logging\ram_plot_2200.png
Cleanup log parse.
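As an aside, the FutureWarning from log_parser.py is unrelated to the crash and easy to silence: pandas deprecated `fillna(method=...)` in favor of the dedicated `ffill()`/`bfill()` methods, exactly as the warning text suggests. A small sketch of the one-line change (the data here is made up for illustration):

```python
import numpy as np
import pandas as pd

# Toy loss column with gaps, standing in for all_df_loss in log_parser.py.
df = pd.DataFrame({"loss": [0.5, np.nan, 0.3, np.nan]})

# Deprecated spelling that triggers the warning:
#   df.fillna(method="ffill")
# Replacement suggested by the warning itself:
filled = df.ffill()  # forward-fill each NaN with the last valid value
```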
Steps: 100%|███████████████████████████████████████| 2400/2400 [18:22<00:00, 2.57it/s, loss=0.167, lr=2e-6, vram=11.6]
C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 11.61it/s]
Traceback (most recent call last): | 0/1 [00:00<?, ?it/s]
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\ui_functions.py", line 730, in start_training
    result = main(class_gen_method=class_gen_method)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1809, in main
    return inner_loop()
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1766, in inner_loop
    check_save(True)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 976, in check_save
    save_weights(
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\train_dreambooth.py", line 1100, in save_weights
    unet_lora_layers_to_save = unet_attn_processors_state_dict(unet)
  File "C:\Users\HOLO\Desktop\AAII\stable-diffusion-webui\extensions\sd_dreambooth_extension\dreambooth\utils\model_utils.py", line 242, in unet_attn_processors_state_dict
    for parameter_key, parameter in attn_processor.state_dict().items():
AttributeError: 'AttnProcessor2_0' object has no attribute 'state_dict'
Steps: 100%|███████████████████████████████████████| 2400/2400 [18:23<00:00, 2.17it/s, loss=0.167, lr=2e-6, vram=11.6]
Duration: 00:18:46
Saving Lora Weights...: 0%| | 0/1 [00:00<?, ?it/s]
Duration: 00:18:47
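The final AttributeError happens because `unet_attn_processors_state_dict` iterates over every attention processor and calls `state_dict()` on each one, but diffusers' plain `AttnProcessor2_0` (the PyTorch 2.0 scaled-dot-product-attention processor) is not an `nn.Module` and carries no weights, so it has no `state_dict`. A defensive version of the idea, sketched with stand-in classes rather than the real diffusers types (class and function names here are illustrative, not the extension's actual code):

```python
import torch

class PlainProcessor:
    """Stand-in for AttnProcessor2_0: a plain object with no parameters."""

class LoRAProcessor(torch.nn.Module):
    """Stand-in for a LoRA attention processor that does carry weights."""
    def __init__(self):
        super().__init__()
        self.down = torch.nn.Linear(4, 2, bias=False)

def collect_lora_state(processors):
    """Gather weights from processors, skipping any without a state_dict."""
    out = {}
    for name, proc in processors.items():
        if not hasattr(proc, "state_dict"):
            continue  # plain processors (e.g. AttnProcessor2_0) hold no weights
        for key, value in proc.state_dict().items():
            out[f"{name}.{key}"] = value
    return out

procs = {"up.attn1": PlainProcessor(), "down.attn2": LoRAProcessor()}
state = collect_lora_state(procs)  # only the LoRA processor contributes keys
```

This matches the user-reported workaround: disabling "Calculate Split Loss" apparently changes which save path runs, so the LoRA-saving helper never hits the parameter-less processors.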
I have just solved it! Just deselect the 'Calculate Split Loss' option on the Testing tab. It will help. Thanks!
This issue is stale because it has been open 5 days with no activity. Remove stale label or comment or this will be closed in 5 days
Is there an existing issue for this?
What happened?
Windows 11, Chrome browser. I've tried many times and it's still the same. First I had a CUDA problem (it said CUDA did not exist); re-cloning bitsandbytes solved that, but the local variable error still occurs.
that's how it looks:
Steps to reproduce the problem
I create the model as in the screenshot, finish setting up the training settings, and generate class images; then, when I click Train, this error appears.
Commit and libraries
Initializing Dreambooth
Dreambooth revision: 1a1d1621086a4725fda1200256f319c845dc7a8a
Successfully installed accelerate-0.23.0 fastapi-0.94.1 transformers-4.32.1
[+] xformers version 0.0.22 installed.
[+] torch version 2.0.1+cu118 installed.
[+] torchvision version 0.15.2+cu118 installed.
[+] accelerate version 0.23.0 installed.
[+] diffusers version 0.21.4 installed.
[+] transformers version 4.32.1 installed.
[+] bitsandbytes version 0.41.1 installed.
Launching Web UI with arguments: --xformers --autolaunch
Command Line Arguments
Console logs
Additional information
No response