Hi, I'm having trouble training a LyCORIS using the "full" algorithm. This is the error I get:
Traceback (most recent call last):
  File "/content/kohya-trainer/train_network.py", line 1033, in <module>
    trainer.train(args)
  File "/content/kohya-trainer/train_network.py", line 849, in train
    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2040, in clip_grad_norm_
    self.unscale_gradients()
  File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 2003, in unscale_gradients
    self.scaler.unscale_(opt)
  File "/usr/local/lib/python3.10/dist-packages/torch/cuda/amp/grad_scaler.py", line 307, in unscale_
    optimizer_state["found_inf_per_device"] = self._unscale_grads_(
  File "/usr/local/lib/python3.10/dist-packages/torch/cuda/amp/grad_scaler.py", line 229, in _unscale_grads_
    raise ValueError("Attempting to unscale FP16 gradients.")
ValueError: Attempting to unscale FP16 gradients.
I'm wondering if there is anything obviously wrong in my config file (below). It works fine with other algorithms such as lora and lokr, which is strange.
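For context, here is a minimal sketch of what I understand the error to mean (my own guess, not code from kohya-trainer or LyCORIS): GradScaler.unscale_ refuses to unscale gradients that are stored in fp16, which happens when the trainable parameters themselves are fp16 rather than fp32 copies. Maybe the "full" algorithm ends up training the original fp16 weights directly, unlike lora and lokr, which add new parameters?

```python
import torch

# Standalone sketch (my assumption, not the trainer's code); requires a CUDA device.
# fp16 parameters produce fp16 gradients, and GradScaler.unscale_ rejects those.
model = torch.nn.Linear(4, 4).half().cuda()               # parameters in fp16
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()

loss = model(torch.randn(2, 4, device="cuda", dtype=torch.float16)).sum()
scaler.scale(loss).backward()                              # gradients are fp16
scaler.unscale_(optimizer)  # ValueError: Attempting to unscale FP16 gradients.
```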
Extra Info
Let me know if you need any extra info; I'm quite new to training LoRAs / LyCORIS, so I might be doing something silly.