Closed tetelias closed 5 years ago
Hi,
Thanks for pointing out the README issue, I'll have a look.
The model usually works with small datasets (I managed to train one up to 1024 resolution with 3000 images), but in this case you can indeed expect the loss to behave strangely. There is no specific rule for finding the perfect number of iterations. As a general rule, even if the loss is increasing, the more iterations you run, the better the results.
I'm training on my own dataset, which isn't very diverse, so alpha drops to zero rather quickly: around 20k iterations, whereas most default scale length values are 96k. Once alpha reaches zero, the loss starts to increase slowly. The generated pictures still look fine, but I wonder if there's a rule of thumb for training length in such a case?
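For context, the alpha behavior described above can be sketched as a linear blending schedule, which is the usual convention in progressive-growing GANs: alpha decays from 1 to 0 over a fixed number of iterations per scale. This is a minimal illustrative sketch, not the repo's actual code; the function name `alpha_at` and the parameter `scale_iterations` are assumptions for illustration.

```python
def alpha_at(iteration: int, scale_iterations: int = 96_000) -> float:
    """Linearly decay the blending factor alpha from 1.0 to 0.0.

    alpha mixes the newly added resolution layer with the previous
    scale's output; once it hits 0 the new layer is fully active.
    `scale_iterations` mirrors the ~96k default scale length mentioned
    above; a shorter schedule (e.g. 20k) reaches zero sooner.
    """
    return max(0.0, 1.0 - iteration / scale_iterations)

print(alpha_at(0))        # start of the scale: fully blended with old layer
print(alpha_at(48_000))   # halfway through the default schedule
print(alpha_at(96_000))   # schedule exhausted: alpha stays at 0
```

Under this sketch, a loss shift right when alpha reaches zero is expected, since the discriminator then sees the new layer's output unblended.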
Also, --Class was replaced with --Main, if I understood the code correctly, but --Class is still mentioned in the README.