tempyoutub opened 1 month ago
I'm having the same issue as well. I'd like to know a fix so I can start training a custom LoRA.
I have the same issue
I have the same issue. PC specs: 11700F on a Z490 chipset, RTX 4070 Super (12 GB VRAM), 32 GB RAM, Windows 11. Newest NVIDIA drivers, version 556 (October 2024). The train script, train config, and advanced settings are all at their defaults - no changes were made.
The problem may be related to memory management. Training with 3 small JPGs uses up to 27.3 GB of my 32 GB of RAM. Peak VRAM usage is about 7.4 GB.
Look at what happens to VRAM and RAM when I click "Start Training": https://vimeo.com/1025245371
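If anyone wants to log this over time instead of eyeballing Task Manager, here is a minimal monitoring sketch I'd run in a second terminal while training. It is not part of FluxGym: it assumes `psutil` and PyTorch are available in the environment, and the 2-second interval and GiB formatting are arbitrary choices.

```python
# watch_mem.py - rough RAM/VRAM logger to run alongside training (not part of FluxGym)
import time

import psutil   # assumed installed: pip install psutil
import torch

while True:
    ram = psutil.virtual_memory()                 # system-wide RAM counters
    line = f"RAM {ram.used / 2**30:5.1f} / {ram.total / 2**30:.1f} GiB"
    if torch.cuda.is_available():
        free, total = torch.cuda.mem_get_info()   # device-wide VRAM, includes other processes
        line += f" | VRAM {(total - free) / 2**30:5.1f} / {total / 2**30:.1f} GiB"
    print(line, flush=True)
    time.sleep(2)
```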
I solved my problem: there wasn't enough free space for virtual memory. In the first 60 seconds of training, FLUX uses up to 50 GB of memory. If you don't have that much physical RAM, virtual memory (the page file) comes into play. Make sure you have enough free space on your C: drive, or put the page file on another drive. I set a 50 GB page file on my D: drive and training ran correctly.
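For anyone hitting the same wall, a quick way to sanity-check whether RAM plus page file can cover that loading peak is the sketch below. It assumes `psutil` is installed, and the roughly 50 GB figure is just what I observed, not an official requirement.

```python
# check_commit.py - pre-flight check: can RAM + page file cover the FLUX loading peak?
import psutil   # assumed installed: pip install psutil

PEAK_GIB = 50   # approximate peak observed while loading flux1-dev, not an official number

ram = psutil.virtual_memory()
swap = psutil.swap_memory()        # on Windows this reflects the page file

combined_gib = (ram.total + swap.total) / 2**30
print(f"Physical RAM : {ram.total / 2**30:6.1f} GiB")
print(f"Page file    : {swap.total / 2**30:6.1f} GiB")
print(f"Combined     : {combined_gib:6.1f} GiB")

if combined_gib < PEAK_GIB:
    print(f"Combined memory is under ~{PEAK_GIB} GiB - enlarge the page file "
          f"(it can live on another drive) before starting training.")
```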
I'm running on Windows 10. FLUX and many other AI repos work flawlessly for me, even the most error-prone ones like Tortoise TTS, but I can't get past an error when running FluxGym. The AI captions generate successfully. I'm on an RTX 3060 with 12 GB VRAM, my PC has 32 GB RAM, and I do select the 12 GB option in the VRAM toggle.
When I start training, it errors out within 30 seconds and generates this:
[2024-10-21 08:56:36] [INFO] Running S:\FluxGym\outputs\testlora123\train.bat
[2024-10-21 08:56:36] [INFO]
[2024-10-21 08:56:36] [INFO] (env) S:\FluxGym>accelerate launch --mixed_precision bf16 --num_cpu_threads_per_process 1 sd-scripts/flux_train_network.py --pretrained_model_name_or_path "S:\FluxGym\models\unet\flux1-dev.sft" --clip_l "S:\FluxGym\models\clip\clip_l.safetensors" --t5xxl "S:\FluxGym\models\clip\t5xxl_fp16.safetensors" --ae "S:\FluxGym\models\vae\ae.sft" --cache_latents_to_disk --save_model_as safetensors --sdpa --persistent_data_loader_workers --max_data_loader_n_workers 2 --seed 42 --gradient_checkpointing --mixed_precision bf16 --save_precision bf16 --network_module networks.lora_flux --network_dim 4 --optimizer_type adafactor --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" --split_mode --network_args "train_blocks=single" --lr_scheduler constant_with_warmup --max_grad_norm 0.0 --learning_rate 8e-4 --cache_text_encoder_outputs --cache_text_encoder_outputs_to_disk --fp8_base --highvram --max_train_epochs 16 --save_every_n_epochs 4 --dataset_config "S:\FluxGym\outputs\testlora123\dataset.toml" --output_dir "S:\FluxGym\outputs\testlora123" --output_name testlora123 --timestep_sampling shift --discrete_flow_shift 3.1582 --model_prediction_type raw --guidance_scale 1 --loss_type l2
[2024-10-21 08:56:43] [INFO] The following values were not passed to `accelerate launch` and had defaults used instead:
[2024-10-21 08:56:43] [INFO] `--num_processes` was set to a value of `1`
[2024-10-21 08:56:43] [INFO] `--num_machines` was set to a value of `1`
[2024-10-21 08:56:43] [INFO] `--dynamo_backend` was set to a value of `'no'`
[2024-10-21 08:56:43] [INFO] To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
[2024-10-21 08:56:50] [INFO] INFO highvram is enabled / highvramが有効です (train_util.py:4090)
[2024-10-21 08:56:50] [INFO] WARNING cache_latents_to_disk is enabled, so cache_latents is also enabled / cache_latents_to_diskが有効なため、cache_latentsを有効にします (train_util.py:4110)
[2024-10-21 08:56:50] [INFO] INFO Checking the state dict: Diffusers or BFL, dev or schnell (flux_utils.py:62)
[2024-10-21 08:56:50] [INFO] INFO t5xxl_max_token_length: 512 (flux_train_network.py:152)
[2024-10-21 08:56:51] [INFO] S:\FluxGym\env\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
[2024-10-21 08:56:51] [INFO] warnings.warn(
[2024-10-21 08:56:51] [INFO] Traceback (most recent call last):
[2024-10-21 08:56:51] [INFO] File "S:\FluxGym\sd-scripts\flux_train_network.py", line 519, in <module>
[2024-10-21 08:56:51] [INFO] trainer.train(args)
[2024-10-21 08:56:51] [INFO] File "S:\FluxGym\sd-scripts\train_network.py", line 268, in train
[2024-10-21 08:56:51] [INFO] tokenize_strategy = self.get_tokenize_strategy(args)
[2024-10-21 08:56:51] [INFO] File "S:\FluxGym\sd-scripts\flux_train_network.py", line 153, in get_tokenize_strategy
[2024-10-21 08:56:51] [INFO] return strategy_flux.FluxTokenizeStrategy(t5xxl_max_token_length, args.tokenizer_cache_dir)
[2024-10-21 08:56:51] [INFO] File "S:\FluxGym\sd-scripts\library\strategy_flux.py", line 27, in init
[2024-10-21 08:56:51] [INFO] self.t5xxl = self._load_tokenizer(T5TokenizerFast, T5_XXL_TOKENIZER_ID, tokenizer_cache_dir=tokenizer_cache_dir)
[2024-10-21 08:56:51] [INFO] File "S:\FluxGym\sd-scripts\library\strategy_base.py", line 65, in _load_tokenizer
[2024-10-21 08:56:51] [INFO] tokenizer = model_class.from_pretrained(model_id, subfolder=subfolder)
[2024-10-21 08:56:51] [INFO] File "S:\FluxGym\env\lib\site-packages\transformers\tokenization_utils_base.py", line 2271, in from_pretrained
[2024-10-21 08:56:51] [INFO] return cls._from_pretrained(
[2024-10-21 08:56:51] [INFO] File "S:\FluxGym\env\lib\site-packages\transformers\tokenization_utils_base.py", line 2309, in _from_pretrained
[2024-10-21 08:56:51] [INFO] slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
[2024-10-21 08:56:51] [INFO] File "S:\FluxGym\env\lib\site-packages\transformers\tokenization_utils_base.py", line 2440, in _from_pretrained
[2024-10-21 08:56:51] [INFO] special_tokens_map = json.load(special_tokens_map_handle)
[2024-10-21 08:56:51] [INFO] File "C:\Users\H67-Desktop\AppData\Local\Programs\Python\Python310\lib\json__init.py", line 293, in load
[2024-10-21 08:56:51] [INFO] return loads(fp.read(),
[2024-10-21 08:56:51] [INFO] File "C:\Users\H67-Desktop\AppData\Local\Programs\Python\Python310\lib\json__init__.py", line 346, in loads
[2024-10-21 08:56:51] [INFO] return _default_decoder.decode(s)
[2024-10-21 08:56:51] [INFO] File "C:\Users\H67-Desktop\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
[2024-10-21 08:56:51] [INFO] obj, end = self.raw_decode(s, idx=_w(s, 0).end())
[2024-10-21 08:56:51] [INFO] File "C:\Users\H67-Desktop\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 355, in raw_decode
[2024-10-21 08:56:51] [INFO] raise JSONDecodeError("Expecting value", s, err.value) from None
[2024-10-21 08:56:51] [INFO] json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
[2024-10-21 08:56:52] [INFO] Traceback (most recent call last):
[2024-10-21 08:56:52] [INFO] File "C:\Users\H67-Desktop\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
[2024-10-21 08:56:52] [INFO] return _run_code(code, main_globals, None,
[2024-10-21 08:56:52] [INFO] File "C:\Users\H67-Desktop\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
[2024-10-21 08:56:52] [INFO] exec(code, run_globals)
[2024-10-21 08:56:52] [INFO] File "S:\FluxGym\env\Scripts\accelerate.exe\main__.py", line 7, in
[2024-10-21 08:56:52] [INFO] sys.exit(main())
[2024-10-21 08:56:52] [INFO] File "S:\FluxGym\env\lib\site-packages\accelerate\commands\accelerate_cli.py", line 48, in main
[2024-10-21 08:56:52] [INFO] args.func(args)
[2024-10-21 08:56:52] [INFO] File "S:\FluxGym\env\lib\site-packages\accelerate\commands\launch.py", line 1106, in launch_command
[2024-10-21 08:56:52] [INFO] simple_launcher(args)
[2024-10-21 08:56:52] [INFO] File "S:\FluxGym\env\lib\site-packages\accelerate\commands\launch.py", line 704, in simple_launcher
[2024-10-21 08:56:52] [INFO] raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
[2024-10-21 08:56:52] [INFO] subprocess.CalledProcessError: Command '['S:\FluxGym\env\Scripts\python.exe', 'sd-scripts/flux_train_network.py', '--pretrained_model_name_or_path', 'S:\FluxGym\models\unet\flux1-dev.sft', '--clip_l', 'S:\FluxGym\models\clip\clip_l.safetensors', '--t5xxl', 'S:\FluxGym\models\clip\t5xxl_fp16.safetensors', '--ae', 'S:\FluxGym\models\vae\ae.sft', '--cache_latents_to_disk', '--save_model_as', 'safetensors', '--sdpa', '--persistent_data_loader_workers', '--max_data_loader_n_workers', '2', '--seed', '42', '--gradient_checkpointing', '--mixed_precision', 'bf16', '--save_precision', 'bf16', '--network_module', 'networks.lora_flux', '--network_dim', '4', '--optimizer_type', 'adafactor', '--optimizer_args', 'relative_step=False', 'scale_parameter=False', 'warmup_init=False', '--split_mode', '--network_args', 'train_blocks=single', '--lr_scheduler', 'constant_with_warmup', '--max_grad_norm', '0.0', '--learning_rate', '8e-4', '--cache_text_encoder_outputs', '--cache_text_encoder_outputs_to_disk', '--fp8_base', '--highvram', '--max_train_epochs', '16', '--save_every_n_epochs', '4', '--dataset_config', 'S:\FluxGym\outputs\testlora123\dataset.toml', '--output_dir', 'S:\FluxGym\outputs\testlora123', '--output_name', 'testlora123', '--timestep_sampling', 'shift', '--discrete_flow_shift', '3.1582', '--model_prediction_type', 'raw', '--guidance_scale', '1', '--loss_type', 'l2']' returned non-zero exit status 1.
[2024-10-21 08:56:53] [ERROR] Command exited with code 1
[2024-10-21 08:56:53] [INFO] Runner:

Please can anyone help me fix this?
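Not a guaranteed fix, but the traceback shows `special_tokens_map.json` for the T5 tokenizer failing to parse, which usually means an empty or half-downloaded file in the Hugging Face cache (easy to end up with if the disk filled up, as discussed above). Below is a minimal diagnostic sketch; it assumes the default cache location `~/.cache/huggingface/hub` (adjust if you set `HF_HOME` or pass `--tokenizer_cache_dir`) and simply flags JSON files that won't parse, so you can delete the affected model folder and let it re-download on the next run.

```python
# find_bad_json.py - flag empty or corrupt JSON files in the Hugging Face cache
import json
from pathlib import Path

# Default cache location; change this if HF_HOME or --tokenizer_cache_dir is set.
cache = Path.home() / ".cache" / "huggingface" / "hub"

bad = []
for path in cache.rglob("*.json"):
    try:
        with open(path, "r", encoding="utf-8") as f:
            json.load(f)
    except (json.JSONDecodeError, UnicodeDecodeError, OSError):
        bad.append(path)

for path in bad:
    print(f"Corrupt or empty JSON: {path}")

if not bad:
    print("All cached JSON files parse cleanly - the problem is likely elsewhere.")
```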