I'm trying to train a LoRA model for a specific style, but training keeps ending after about 5 seconds. The base model I'm using is Pony Diffusion XL v6, and I'm on an RTX 3080 Ti with 12 GB of VRAM. My dataset is 111 images with no regularization images. I'm really new to this and don't really know what I'm doing, please help.
14:08:55-412607 INFO Start training LoRA Standard ...
14:08:55-413607 INFO Validating lr scheduler arguments...
14:08:55-414607 INFO Validating optimizer arguments...
14:08:55-415107 INFO Validating E:/rokudenashimodel/log existence and writability... SUCCESS
14:08:55-416107 INFO Validating E:/rokudenashimodel/model existence and writability... SUCCESS
14:08:55-416607 ERROR Validating E:\rokudenashimodel\base model existence... FAILED: does not exist
14:08:55-417108 INFO Validating E:\rokudenashimodel\base model existence... SUCCESS
14:08:55-418111 INFO Validating E:/rokudenashimodel/img existence... SUCCESS
14:08:55-419113 INFO Folder 25_rodenashiart man: 25 repeats found
14:08:55-420113 INFO Folder 25_rodenashiart man: 113 images found
14:08:55-420613 INFO Folder 25_rodenashiart man: 113 * 25 = 2825 steps
14:08:55-421113 INFO Regulatization factor: 1
14:08:55-421613 INFO Total steps: 2825
14:08:55-422613 INFO Train batch size: 2
14:08:55-423113 INFO Gradient accumulation steps: 1
14:08:55-423614 INFO Epoch: 1
14:08:55-424614 INFO Max train steps: 8000
14:08:55-425113 INFO stop_text_encoder_training = 0
14:08:55-425613 INFO lr_warmup_steps = 800
14:08:55-427613 INFO Saving training config to E:/rokudenashimodel/model\rokudenashiart_20241004-140855.json...
14:08:55-428615 INFO Executing command: E:\kohya_ss\venv\Scripts\accelerate.EXE launch --dynamo_backend no
--dynamo_mode default --mixed_precision fp16 --num_processes 1 --num_machines 1
--num_cpu_threads_per_process 2 E:/kohya_ss/sd-scripts/train_network.py --config_file
E:/rokudenashimodel/model/config_lora-20241004-140855.toml
14:08:55-438116 INFO Command executed.
2024-10-04 14:09:03 INFO Loading settings from E:/rokudenashimodel/model/config_lora-20241004-140855.toml... train_util.py:4174
INFO E:/rokudenashimodel/model/config_lora-20241004-140855 train_util.py:4193
2024-10-04 14:09:03 INFO prepare tokenizer train_util.py:4665
2024-10-04 14:09:04 INFO update token length: 75 train_util.py:4682
INFO Using DreamBooth method. train_network.py:172
Traceback (most recent call last):
File "E:\kohya_ss\sd-scripts\train_network.py", line 1242, in <module>
trainer.train(args)
File "E:\kohya_ss\sd-scripts\train_network.py", line 198, in train
train_dataset_group = config_util.generate_dataset_group_by_blueprint(blueprint.dataset_group)
File "E:\kohya_ss\sd-scripts\library\config_util.py", line 487, in generate_dataset_group_by_blueprint
dataset = dataset_klass(subsets=subsets, **asdict(dataset_blueprint.params))
File "E:\kohya_ss\sd-scripts\library\train_util.py", line 1682, in init
max(resolution) <= max_bucket_reso
AssertionError: max_bucket_reso must be equal or greater than resolution / (translation of the Japanese half: max_bucket_reso cannot be smaller than the maximum resolution; either reduce the resolution or increase min_bucket_reso)
Traceback (most recent call last):
File "C:\Users\xxxxxx\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\xxxxxx\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "E:\kohya_ss\venv\Scripts\accelerate.EXE\__main__.py", line 7, in <module>
File "E:\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
args.func(args)
File "E:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1017, in launch_command
simple_launcher(args)
File "E:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 637, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['E:\kohya_ss\venv\Scripts\python.exe', 'E:/kohya_ss/sd-scripts/train_network.py', '--config_file', 'E:/rokudenashimodel/model/config_lora-20241004-140855.toml']' returned non-zero exit status 1.
14:09:06-652578 INFO Training has ended.
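For context, the AssertionError near the top of the traceback is the real failure: before any training step runs, the dataset setup verifies that the configured training resolution fits under the bucket ceiling (`max_bucket_reso`). Here is a minimal sketch of that comparison, paraphrased from the error message; the function name and the example values are illustrative, not taken from the actual sd-scripts source or my config:

```python
# Hypothetical sketch of the bucket check that raises the AssertionError.
# resolution is (width, height); the larger side must not exceed max_bucket_reso.
def fits_bucket_ceiling(resolution, max_bucket_reso):
    return max(resolution) <= max_bucket_reso

# Example: an SDXL-scale resolution of 1024x1024 passes with a ceiling of 1024,
# but fails with a smaller ceiling of 512, aborting before training starts.
print(fits_bucket_ceiling((1024, 1024), 1024))  # True
print(fits_bucket_ceiling((1024, 1024), 512))   # False
```

In other words, the message is saying to either lower the training resolution in the config or raise the bucket ceiling so the resolution fits.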