cocktailpeanut / fluxgym

Dead simple FLUX LoRA training UI with LOW VRAM support
MIT License

[Feature Request]: Remove the hard coded batch size & add it to the advanced options #181

Open · Vigilence opened this issue 1 month ago

Vigilence commented 1 month ago

If the hard-coded batch size could instead be edited in the advanced config, that would be great. After some testing, I figured out that the batch size is hard-coded in the app.py file. I had initially tried changing the batch size via the advanced commands, but noticed it didn't increase VRAM usage or change the training speed.

So to actually change the batch size, I edited the hard-coded value in app.py directly:

[Five screenshots from 2024-10-07 showing where the batch size is hard-coded in app.py and the edit made]
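For anyone wanting to make the same change, the edit amounts to something like the sketch below. This is a minimal illustration, not fluxgym's actual code: the function name `gen_cmd` and the UI wiring are hypothetical, though `--train_batch_size` is the real kohya sd-scripts flag that ends up in the generated training command.

```python
# Hypothetical sketch: expose the batch size as a Gradio control instead of
# hard-coding it in the command string. Names here (gen_cmd, batch_size) are
# illustrative and may not match fluxgym's app.py.
import gradio as gr

def gen_cmd(batch_size: int) -> str:
    # Interpolate the user-chosen value instead of a hard-coded literal "1".
    # --train_batch_size is the flag kohya's sd-scripts trainers accept.
    return (
        "accelerate launch sd-scripts/flux_train_network.py "
        f"--train_batch_size {int(batch_size)} ..."
    )

with gr.Blocks() as demo:
    # Default of 1 keeps the old hard-coded behavior for existing users.
    batch_size = gr.Slider(1, 8, value=1, step=1, label="Train batch size")
    cmd = gr.Textbox(label="Generated command")
    batch_size.change(gen_cmd, inputs=batch_size, outputs=cmd)

if __name__ == "__main__":
    demo.launch()
```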

MLGODFATHER commented 1 month ago

Thanks for the information; this should definitely be updated so we can all edit the batch size.

ClothingAI commented 1 month ago

Same for number of CPUs no? @Vigilence

Vigilence commented 1 month ago

> Same for number of CPUs no? @Vigilence

Yes, that’s true.
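The same pattern would presumably apply to the CPU thread count. A sketch under that assumption: `--num_cpu_threads_per_process` is a real `accelerate launch` flag, but whether fluxgym hard-codes it in app.py is an assumption here, and `gen_cmd` remains a hypothetical name.

```python
# Same pattern applied to the CPU thread count (hypothetical sketch).
def gen_cmd(batch_size: int, cpu_threads: int) -> str:
    return (
        f"accelerate launch --num_cpu_threads_per_process {int(cpu_threads)} "
        "sd-scripts/flux_train_network.py "
        f"--train_batch_size {int(batch_size)} ..."
    )

print(gen_cmd(batch_size=4, cpu_threads=2))
```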