invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: textual inversion error #7088

Closed: MrFries1111 closed this issue 1 month ago

MrFries1111 commented 1 month ago

Is there an existing issue for this problem?

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

4080

GPU VRAM

16

Version number

5.0.0

Browser

Google Chrome

```
{'model': 'sdxl/main/WildCardX-XL_v4',
 'resolution': 1024,
 'lr_scheduler': 'constant',
 'mixed_precision': 'fp16',
 'learnable_property': 'style',
 'initializer_token': '★',
 'placeholder_token': '',
 'train_data_dir': 'D:\invoke\text-inversion-training-data\curtainbangs',
 'output_dir': 'D:\invoke\text-inversion-output\curtainbangs',
 'scale_lr': True,
 'center_crop': False,
 'enable_xformers_memory_efficient_attention': False,
 'train_batch_size': 8,
 'gradient_accumulation_steps': 4,
 'num_train_epochs': 100,
 'max_train_steps': 3000,
 'lr_warmup_steps': 0,
 'learning_rate': 0.0005,
 'resume_from_checkpoint': 'latest',
 'only_save_embeds': True}
10/10/2024 13:50:40 - INFO - invokeai.backend.training.textual_inversion_training - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: fp16
```
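As a side note, the totals reported in the training log below follow directly from this configuration. A quick sketch of the arithmetic, assuming the standard diffusers/accelerate bookkeeping this script uses (variable names are illustrative, not taken from the InvokeAI source):

```python
import math

# Values copied from the config above.
num_examples = 2000
train_batch_size = 8
gradient_accumulation_steps = 4
num_processes = 1  # "Distributed environment: NO"
max_train_steps = 3000
learning_rate = 0.0005

# "Total train batch size (w. parallel, distributed & accumulation) = 32"
total_batch_size = train_batch_size * num_processes * gradient_accumulation_steps  # 32

# "Num Epochs = 48": max_train_steps caps the configured 100 epochs.
batches_per_epoch = math.ceil(num_examples / train_batch_size)                       # 250
update_steps_per_epoch = math.ceil(batches_per_epoch / gradient_accumulation_steps)  # 63
num_epochs = math.ceil(max_train_steps / update_steps_per_epoch)                     # 48

# scale_lr=True typically multiplies the base rate by the total batch size.
scaled_lr = learning_rate * total_batch_size  # 0.016
```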

```
[2024-10-10 13:50:40,998]::[InvokeAI]::INFO --> Initializing database at D:\invoke\databases\invokeai.db
{'dynamic_thresholding_ratio', 'variance_type', 'clip_sample_range', 'thresholding'} was not found in config. Values will be initialized to default values.
10/10/2024 13:50:44 - INFO - invokeai.backend.training.textual_inversion_training - Running training
10/10/2024 13:50:44 - INFO - invokeai.backend.training.textual_inversion_training - Num examples = 2000
10/10/2024 13:50:44 - INFO - invokeai.backend.training.textual_inversion_training - Num Epochs = 48
10/10/2024 13:50:44 - INFO - invokeai.backend.training.textual_inversion_training - Instantaneous batch size per device = 8
10/10/2024 13:50:44 - INFO - invokeai.backend.training.textual_inversion_training - Total train batch size (w. parallel, distributed & accumulation) = 32
10/10/2024 13:50:44 - INFO - invokeai.backend.training.textual_inversion_training - Gradient Accumulation steps = 4
10/10/2024 13:50:44 - INFO - invokeai.backend.training.textual_inversion_training - Total optimization steps = 3000
Checkpoint 'latest' does not exist. Starting a new training run.
Steps:   0%|          | 0/3000 [00:00<?, ?it/s]
[2024-10-10 13:51:04,033]::[InvokeAI]::ERROR --> An exception occurred during training. The exception was:
[2024-10-10 13:51:04,034]::[InvokeAI]::ERROR --> argument of type 'NoneType' is not iterable
[2024-10-10 13:51:04,034]::[InvokeAI]::ERROR --> DETAILS:
Traceback (most recent call last):
  File "D:\invoke\.venv\lib\site-packages\invokeai\frontend\training\textual_inversion.py", line 413, in do_front_end
    do_textual_inversion_training(config, my_args)
  File "D:\invoke\.venv\lib\site-packages\invokeai\backend\training\textual_inversion_training.py", line 840, in do_textual_inversion_training
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "D:\invoke\.venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\invoke\.venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\invoke\.venv\lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1013, in forward
    if "text_embeds" not in added_cond_kwargs:
TypeError: argument of type 'NoneType' is not iterable
```
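For context on the traceback: diffusers' SDXL `UNet2DConditionModel.forward` expects an `added_cond_kwargs` dict carrying the SDXL-specific conditioning (pooled text embeddings and micro-conditioning time IDs). The legacy in-repo script calls the UNet SD-1.x-style with only `noisy_latents`, `timesteps`, and `encoder_hidden_states`, so `added_cond_kwargs` stays `None` and the membership test at line 1013 raises. A minimal sketch of the failure, not InvokeAI code (the corrected-call shape follows diffusers' SDXL examples; `pooled_embeds` and `add_time_ids` are illustrative names):

```python
# Reproduce the exact failure without any ML dependencies:
added_cond_kwargs = None  # the SD-1.x-style call never passes SDXL kwargs

try:
    if "text_embeds" not in added_cond_kwargs:  # `in` on None is the crash site
        pass
except TypeError as exc:
    print(exc)  # argument of type 'NoneType' is not iterable

# An SDXL-aware call would have to look roughly like this instead
# (pooled_embeds / add_time_ids are hypothetical, not variables from
# the InvokeAI script):
#
#   model_pred = unet(
#       noisy_latents,
#       timesteps,
#       encoder_hidden_states=prompt_embeds,
#       added_cond_kwargs={"text_embeds": pooled_embeds, "time_ids": add_time_ids},
#   ).sample
```

In other words, the in-repo trainer predates SDXL and cannot drive an SDXL UNet, which is consistent with the maintainer's reply below.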

What happened

This error happens while doing SDXL textual inversion training.

What you expected to happen

How can I solve this, guys?

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

psychedelicious commented 1 month ago

I don't think you are actually using v5, because it does not do training. In fact, there is no training code at all. We moved training to a separate repo a long time ago: https://github.com/invoke-ai/invoke-training

Please use that project for TI training. If you have a problem, please create an issue on that repo.

MrFries1111 commented 1 month ago

> I don't think you are actually using v5, because it does not do training. In fact, there is no training code at all. We moved training to a separate repo a long time ago: https://github.com/invoke-ai/invoke-training
>
> Please use that project for TI training. If you have a problem, please create an issue on that repo.

Thanks, I just noticed that you have a separate repo.