hollowstrawberry / kohya-colab

Accessible Google Colab notebooks for Stable Diffusion Lora training, based on the work of kohya-ss and Linaqruf
GNU General Public License v3.0

Loss value higher than before #107

Closed ddddfre closed 5 months ago

ddddfre commented 6 months ago

Thank you so much for fixing the program so quickly.

I'm always grateful to the developer.

However, I have a question I'd like to ask.

I trained with 14 repeats, batch size 1, and 10 epochs.

"Loss" was 0.3 to 0.4 for 10 epochs, but is now 0.5 to 0.6 for 10 epochs.

The LoRA I train now behaves differently from the old one, perhaps because of this loss difference.

Has anything changed in this version?

This might sound awkward because I'm writing through a translator.

hollowstrawberry commented 6 months ago

How many images are you using with 14 repeats?

It may be related to #109, but I still haven't been able to track down that problem.

ddddfre commented 5 months ago

This is not a fatal problem.

I think you can close it.

hollowstrawberry commented 5 months ago

Okay. If necessary you can continue discussion on that other issue.