Closed · manuelpepe2024 closed this issue 6 months ago
After an update to Kohya_ss a few days ago the trainer broke. The Creator is aware of the issue and is working on getting it fixed.
set the network alpha to 1 (for now) :)
DO NOT SET THE NETWORK ALPHA TO 1. That does not fix the issue! Your LoRA will be poor quality unless you're training something super generic like a ball.
Network alpha should be set to at least half of the network rank (e.g. 32R-16A, 64R-32A, 128R-64A).
Whatever you're trying to train can wait a couple of days. Don't sacrifice compute time for nothing. Just wait.
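To see why alpha matters, here is a minimal sketch of the standard LoRA scaling rule (the low-rank update is multiplied by alpha / rank, which is how kohya-ss-style trainers apply it). The function names and toy dimensions are illustrative, not the trainer's actual config keys:

```python
# Sketch of the LoRA scaling rule: delta_W = (alpha / rank) * B @ A.
# All names here (lora_scale, lora_delta) are illustrative only.
import numpy as np

def lora_scale(rank: int, alpha: float) -> float:
    """Effective multiplier applied to the low-rank update B @ A."""
    return alpha / rank

def lora_delta(rank: int, alpha: float, d_out: int = 8, d_in: int = 8,
               seed: int = 0) -> np.ndarray:
    """Build a toy low-rank weight update delta_W = scale * B @ A."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((d_out, rank))
    A = rng.standard_normal((rank, d_in))
    return lora_scale(rank, alpha) * (B @ A)

# At rank 32, alpha 1 scales the update by 1/32; alpha 16 scales it
# by 1/2, so the adapter's contribution is 16x stronger.
print(lora_scale(32, 1))   # 0.03125
print(lora_scale(32, 16))  # 0.5
print(lora_scale(64, 32))  # 0.5
```

This is why dropping alpha to 1 while keeping rank 32 doesn't "fix" anything: it just shrinks the learned update by 16x relative to the 32R-16A baseline, which is what makes the results look undertrained.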
I feel like I need to prove a point here, so here is an example for those telling people to use net alpha 1. These are "bare generations" (meaning LoRA::0.7, activator only).
Here are the first 3 bare generations with a LoRA trained at net rank 32, net alpha 1:
Here are the first 3 bare generations of my first LoRA ever. It is, by far, the worst quality LoRA I have ever made without trying. Net rank 32, net alpha 16:
DON'T TRAIN AT NET ALPHA 1.
Just want to confirm my LoRA models are not working after training here. Never had issues before, so hopefully it gets fixed soon.
Sorry, but sadly it doesn't work... and I don't understand why; I used it in Colab a few days ago and it worked amazingly, as always. Please, if someone can tell me how to use the old version until the fix lands... please... help.