Closed. Woisek closed this issue 6 months ago.
I searched every option, but I can't find anything that could cause this issue. Could anyone smarter than me point me in the right direction? Kohya is updated to the latest version.
```
20:02:07-822738 INFO Start training LoRA Standard ...
20:02:07-823735 INFO Checking for duplicate image filenames in training data directory...
20:02:07-825730 INFO Valid image folder names found in: T:/AI_training/training/LoRA/EmilyHill_v6\img
20:02:07-826727 INFO Valid image folder names found in: T:/AI_training/training/LoRA/EmilyHill_v6\reg
20:02:07-827724 INFO Folder 70_Em1lyH1ll woman: 24 images found
20:02:07-828746 INFO Folder 70_Em1lyH1ll woman: 1680 steps
20:02:07-829749 WARNING Regularisation images are used... Will double the number of steps required...
20:02:07-830740 INFO Total steps: 1680
20:02:07-831742 INFO Train batch size: 1
20:02:07-832739 INFO Gradient accumulation steps: 1
20:02:07-833709 INFO Epoch: 1
20:02:07-833709 INFO Regulatization factor: 2
20:02:07-834734 INFO max_train_steps (1680 / 1 / 1 * 1 * 2) = 3360
20:02:07-835730 INFO stop_text_encoder_training = 0
20:02:07-837699 INFO lr_warmup_steps = 0
20:02:07-838723 INFO Saving training config to T:/AI_training/training/LoRA/EmilyHill_v6\model\Em1LyH1ll_v6.0_20240203-200207.json...
20:02:07-840718 INFO accelerate launch --num_cpu_threads_per_process=2 "./train_network.py" --enable_bucket --min_bucket_reso=256 --max_bucket_reso=2048 --pretrained_model_name_or_path="V:/SD1.5-pruned-emaonly(3.97GB).safetensors" --train_data_dir="T:/AI_training/training/LoRA/EmilyHill_v6\img" --reg_data_dir="T:/AI_training/training/LoRA/EmilyHill_v6\reg" --resolution="512,512" --output_dir="T:/AI_training/training/LoRA/EmilyHill_v6\model" --logging_dir="T:/AI_training/training/LoRA/EmilyHill_v6\log" --network_alpha="128" --save_model_as=safetensors --network_module=networks.lora --text_encoder_lr=5e-05 --unet_lr=0.0001 --network_dim=128 --output_name="Em1LyH1ll_v6.0" --lr_scheduler_num_cycles="1" --learning_rate="0.0001" --lr_scheduler="constant" --train_batch_size="1" --max_train_steps="3360" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --seed="1234" --caption_extension=".txt" --cache_latents --optimizer_type="AdamW8bit" --max_grad_norm="1" --max_data_loader_n_workers="1" --clip_skip=2 --bucket_reso_steps=64 --mem_eff_attn --xformers --bucket_no_upscale --noise_offset=0.05
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
prepare tokenizer
Using DreamBooth method.
prepare images.
found directory T:\AI_training\training\LoRA\EmilyHill_v6\img\70_Em1lyH1ll woman contains 24 image files
found directory T:\AI_training\training\LoRA\EmilyHill_v6\reg\1_woman contains 4420 image files
No caption file found for 4420 images. Training will continue without captions for these images. If class token exists, it will be used.
T:\AI_training\training\LoRA\EmilyHill_v6\reg\1_woman\reg_woman_0001.png
T:\AI_training\training\LoRA\EmilyHill_v6\reg\1_woman\reg_woman_0002.png
T:\AI_training\training\LoRA\EmilyHill_v6\reg\1_woman\reg_woman_0003.png
T:\AI_training\training\LoRA\EmilyHill_v6\reg\1_woman\reg_woman_0004.png
T:\AI_training\training\LoRA\EmilyHill_v6\reg\1_woman\reg_woman_0005.png
T:\AI_training\training\LoRA\EmilyHill_v6\reg\1_woman\reg_woman_0006.png... and 4415 more
1680 train images with repeating.
4420 reg images.
some of reg images are not used
[Dataset 0]
  batch_size: 1
  resolution: (512, 512)
  enable_bucket: True
  network_multiplier: 1.0
  min_bucket_reso: 256
  max_bucket_reso: 2048
  bucket_reso_steps: 64
  bucket_no_upscale: True

  [Subset 0 of Dataset 0]
    image_dir: "T:\AI_training\training\LoRA\EmilyHill_v6\img\70_Em1lyH1ll woman"
    image_count: 24
    num_repeats: 70
    shuffle_caption: False
    keep_tokens: 0
    keep_tokens_separator:
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    caption_prefix: None
    caption_suffix: None
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1, token_warmup_step: 0,
    is_reg: False
    class_tokens: Em1lyH1ll woman
    caption_extension: .txt

  [Subset 1 of Dataset 0]
    image_dir: "T:\AI_training\training\LoRA\EmilyHill_v6\reg\1_woman"
    image_count: 4420
    num_repeats: 1
    shuffle_caption: False
    keep_tokens: 0
    keep_tokens_separator:
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    caption_prefix: None
    caption_suffix: None
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1, token_warmup_step: 0,
    is_reg: True
    class_tokens: woman
    caption_extension: .txt

[Dataset 0]
loading image sizes.
100%|███████████████████████████████████████████████████████████████████████████| 1704/1704 [00:00<00:00, 11466.80it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
number of images (including repeats)
bucket 0: resolution (384, 512), count: 280
bucket 1: resolution (384, 576), count: 280
bucket 2: resolution (384, 640), count: 630
bucket 3: resolution (448, 448), count: 140
bucket 4: resolution (448, 512), count: 280
bucket 5: resolution (512, 448), count: 70
bucket 6: resolution (512, 512), count: 1680
mean ar error (without repeats): 0.00045620199590860633
preparing accelerator
loading model for process 0/1
load StableDiffusion checkpoint: V:/SD1.5-pruned-emaonly(3.97GB).safetensors
UNet2DConditionModel: 64, 8, 768, False, False
loading u-net:
loading vae:
Traceback (most recent call last):
  File "V:\AI_programms\kohya_ss\train_network.py", line 1033, in <module>
    trainer.train(args)
  File "V:\AI_programms\kohya_ss\train_network.py", line 229, in train
    model_version, text_encoder, vae, unet = self.load_target_model(args, weight_dtype, accelerator)
  File "V:\AI_programms\kohya_ss\train_network.py", line 98, in load_target_model
    text_encoder, vae, unet, _ = train_util.load_target_model(args, weight_dtype, accelerator)
  File "V:\AI_programms\kohya_ss\library\train_util.py", line 3996, in load_target_model
    text_encoder, vae, unet, load_stable_diffusion_format = _load_target_model(
  File "V:\AI_programms\kohya_ss\library\train_util.py", line 3950, in _load_target_model
    text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(
  File "V:\AI_programms\kohya_ss\library\model_util.py", line 1069, in load_models_from_stable_diffusion_checkpoint
    info = text_model.load_state_dict(converted_text_encoder_checkpoint)
  File "V:\AI_programms\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
        Missing key(s) in state_dict: "text_model.embeddings.position_ids".
Traceback (most recent call last):
  File "F:\Programme\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "F:\Programme\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "V:\AI_programms\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "V:\AI_programms\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
    args.func(args)
  File "V:\AI_programms\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1017, in launch_command
    simple_launcher(args)
  File "V:\AI_programms\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 637, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['V:\AI_programms\kohya_ss\venv\Scripts\python.exe', './train_network.py', '--enable_bucket', '--min_bucket_reso=256', '--max_bucket_reso=2048', '--pretrained_model_name_or_path=V:/SD1.5-pruned-emaonly(3.97GB).safetensors', '--train_data_dir=T:/AI_training/training/LoRA/EmilyHill_v6\img', '--reg_data_dir=T:/AI_training/training/LoRA/EmilyHill_v6\reg', '--resolution=512,512', '--output_dir=T:/AI_training/training/LoRA/EmilyHill_v6\model', '--logging_dir=T:/AI_training/training/LoRA/EmilyHill_v6\log', '--network_alpha=128', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-05', '--unet_lr=0.0001', '--network_dim=128', '--output_name=Em1LyH1ll_v6.0', '--lr_scheduler_num_cycles=1', '--learning_rate=0.0001', '--lr_scheduler=constant', '--train_batch_size=1', '--max_train_steps=3360', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234', '--caption_extension=.txt', '--cache_latents', '--optimizer_type=AdamW8bit', '--max_grad_norm=1', '--max_data_loader_n_workers=1', '--clip_skip=2', '--bucket_reso_steps=64', '--mem_eff_attn', '--xformers', '--bucket_no_upscale', '--noise_offset=0.05']' returned non-zero exit status 1.
```
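In case it helps narrow things down: I believe the `Missing key(s) in state_dict: "text_model.embeddings.position_ids"` failure usually points to a `transformers` version mismatch. Newer `transformers` releases register the CLIP text encoder's `position_ids` as a non-persistent buffer, so the key no longer appears in saved or converted state dicts, while code built against an older version still demands it on a strict load. Assuming that is what happens here, the mechanics can be sketched in plain PyTorch (toy module, illustrative names only, not kohya's actual code):

```python
import torch
import torch.nn as nn

class TextEmbeddings(nn.Module):
    """Toy stand-in for CLIP's text embeddings (illustrative only)."""
    def __init__(self, persistent: bool):
        super().__init__()
        self.token_embedding = nn.Embedding(77, 8)  # tiny sizes; shapes don't matter here
        # Older transformers registered position_ids as a persistent buffer
        # (saved in, and required by, state dicts); newer releases pass
        # persistent=False, so the key disappears from saved checkpoints.
        self.register_buffer("position_ids", torch.arange(77).unsqueeze(0),
                             persistent=persistent)

old_style = TextEmbeddings(persistent=True)   # model that expects position_ids
new_style = TextEmbeddings(persistent=False)  # checkpoint saved without the key
checkpoint = new_style.state_dict()
assert "position_ids" not in checkpoint

try:
    old_style.load_state_dict(checkpoint)     # strict=True is the default
except RuntimeError as exc:
    print(exc)  # ... Missing key(s) in state_dict: "position_ids".

# Non-strict loading reports the mismatch instead of raising:
result = old_style.load_state_dict(checkpoint, strict=False)
print(result.missing_keys)  # ['position_ids']
```

If that diagnosis is right, likely fixes would be reinstalling the exact `transformers` version pinned in kohya_ss's requirements, or a kohya_ss update that loads the text encoder non-strictly.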
Thanks in advance.
I have exactly the same problem....
```
17:49:08-670078 INFO Version: v22.6.0
17:49:08-677073 INFO nVidia toolkit detected
17:49:09-938431 INFO Torch 2.1.2+cu118
17:49:09-950716 INFO Torch backend: nVidia CUDA 11.8 cuDNN 8700
17:49:09-951684 INFO Torch detected GPU: NVIDIA GeForce RTX 3070 Ti VRAM 8192 Arch (8, 6) Cores 48
17:49:09-953415 INFO Verifying modules installation status from requirements_windows_torch2.txt...
17:49:09-955048 INFO Installing package: torch==2.1.2+cu118 torchvision==0.16.2+cu118 torchaudio==2.1.2+cu118 --index-url https://download.pytorch.org/whl/cu118
17:49:13-112668 INFO Installing package: xformers==0.0.23.post1+cu118 --index-url https://download.pytorch.org/whl/cu118
17:49:14-977479 INFO Verifying modules installation status from requirements.txt...
17:49:14-980995 WARNING Package wrong version: gradio 3.36.1 required 3.41.2
17:49:14-982273 INFO Installing package: gradio==3.41.2
17:49:41-688306 INFO headless: False
17:49:41-694341 INFO Load CSS...
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().

18:00:33-987459 INFO Start training TI...
18:02:06-815645 INFO Start training TI...
18:02:06-817646 INFO Valid image folder names found in: E:\Images\AI\Training\Characters\Roji_Clover\images
18:02:06-818646 INFO Folder 25_Roji_Clover: 3750 steps
18:02:06-819646 INFO max_train_steps = 5000
18:02:06-820645 INFO stop_text_encoder_training = 0
18:02:06-821645 INFO lr_warmup_steps = 500
18:02:06-822645 INFO Saving training config to E:/Images/AI/Training/Characters/Roji_Clover/model\last_20240208-180206.json...
18:02:06-823646 INFO accelerate launch --num_cpu_threads_per_process=2 "./train_textual_inversion.py"
  --enable_bucket --min_bucket_reso=256 --max_bucket_reso=2048
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5"
  --train_data_dir="E:\Images\AI\Training\Characters\Roji_Clover\images" --resolution="768,768"
  --output_dir="E:/Images/AI/Training/Characters/Roji_Clover/model"
  --logging_dir="E:/Images/AI/Training/Characters/Roji_Clover/logs" --save_model_as=safetensors
  --output_name="last" --lr_scheduler_num_cycles="10" --max_data_loader_n_workers="0"
  --lr_scheduler="cosine" --lr_warmup_steps="500" --train_batch_size="1" --max_train_steps="5000"
  --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16"
  --caption_extension=".txt" --cache_latents --optimizer_type="AdamW8bit"
  --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale
  --noise_offset=0.0 --token_string="woman" --init_word="Roji_Clover" --num_vectors_per_token=12
  --weights="E:/Images/AI/Training/Characters/Roji_Clover/model/Roji_Clover_main.json"
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
prepare tokenizer
prepare accelerator
loading model for process 0/1
load Diffusers pretrained models: runwayml/stable-diffusion-v1-5
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:00<00:00, 11.31it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
UNet2DConditionModel: 64, 8, 768, False, False
U-Net converted to original U-Net
token length for init words is not same to num_vectors_per_token, init words is repeated or truncated: tokenizer 1, length 4
Traceback (most recent call last):
  File "F:\kohya_ss\train_textual_inversion.py", line 797, in <module>
```
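A side note on the warning just above the traceback: as I read it, `token length for init words is not same to num_vectors_per_token` only means that `--init_word` ("Roji_Clover") tokenizes to 4 tokens while `--num_vectors_per_token=12`, so the initializer tokens get repeated or truncated to fit. A rough sketch of that repeat/truncate step (hypothetical helper name, not kohya's actual code):

```python
def fit_init_tokens(token_ids, num_vectors_per_token):
    """Tile or cut an init word's token ids to exactly num_vectors_per_token.

    Illustrative only: mimics the behavior the kohya warning describes
    ("init words is repeated or truncated").
    """
    if len(token_ids) == num_vectors_per_token:
        return list(token_ids)
    # Repeat the sequence until it covers num_vectors_per_token, then cut.
    tiled = list(token_ids) * (num_vectors_per_token // len(token_ids) + 1)
    return tiled[:num_vectors_per_token]

# The log reports "tokenizer 1, length 4": 4 init tokens tiled into 12 vectors.
ids = [101, 102, 103, 104]  # placeholder token ids
print(fit_init_tokens(ids, 12))
# [101, 102, 103, 104, 101, 102, 103, 104, 101, 102, 103, 104]
```

So the warning by itself should be harmless; it just flags that the 12 learned vectors start out as repeats of the 4 init tokens. The crash afterwards looks like a separate problem.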
Same error here with AdamW8bit. Annoying :-/