ircrp opened this issue 3 weeks ago
I'm encountering some strange behavior while training a LoRA model with FluxGym, and I'm curious whether anyone else has seen something similar. During training I generated samples at intervals (steps 250, 500, and 750) to check the model's progression, and I've attached an image that illustrates this. Here is the setup I used for the sample prompts:
ADI4 as fireman --d 999
ADI4 as software engineer --d 999
ADI4 as president --d 999
ADI4 as teacher --d 999
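(As far as I understand kohya's sample-prompt syntax, `--d 999` pins each prompt to a fixed seed, so the samples at steps 250, 500, and 750 should be directly comparable per prompt.)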
At step 500, the outputs generally align with the professions in the prompts, showing elements unique to each role, like a suit for "president" or firefighter gear. By step 750, however, things get weird: all the generations start to look alike, most of them taking on characteristics of the teacher role, especially the traditional clothing. It almost feels as if the training is being "overwritten" by previous samples.
Here’s a rough timeline of what I noticed:
Step 250: The generated samples are somewhat unique to each prompt but still rough.
Step 500: Outputs become clearer, and some begin to align more closely with the trained character's likeness.
Step 750: Almost all samples look strikingly similar, with many reflecting the traditional attire seen in the step-500 "teacher" sample, even for prompts like "fireman" and "president," where it shouldn't appear.
Questions:
Has anyone experienced this type of "style bleed" before? Could it be that previously generated samples are somehow influencing the current ones?
Is there a known issue in FluxGym where training appears to converge too strongly toward one class or style over iterations?
Any suggestions on preventing this kind of merging of styles as training progresses?
Train Script:
Train Config:
Comment:
That is not weird, and it shouldn't have anything to do with FluxGym, since FluxGym is just a wrapper around kohya's sd-scripts.
What you are experiencing is most likely overtraining. If you overtrain a LoRA, it will start to spit out similar images. In the best case, just use the LoRA checkpoint from step 500. If you want better results from your LoRA, increase the number of training images and/or the repeats per image; usually I would go with more images.
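For the repeats, here is a minimal sketch of a kohya sd-scripts dataset config (assuming the standard TOML format that sd-scripts accepts via `--dataset_config`; the path and numbers below are placeholders, not your actual setup):

```toml
# dataset.toml -- minimal sketch of a kohya sd-scripts dataset config
# (image_dir and the numeric values are placeholders)
[general]
caption_extension = ".txt"

[[datasets]]
resolution = 1024
batch_size = 1

  [[datasets.subsets]]
  image_dir = "/path/to/ADI4_images"  # adding more distinct images tends to help more than training longer
  class_tokens = "ADI4"
  num_repeats = 10                    # the "repeats per image" knob
```

Passing `--save_every_n_steps 250` (or `--save_every_n_epochs`) to the training script also keeps the intermediate checkpoints, so you can always fall back to the step-500 LoRA instead of the overtrained final one.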