Hello @KichangKim!
I have a question about the learning rate values used for training, especially for v3. As far as I understand the code, you set the learning rate to 5.0 during the first epoch and never change it afterwards.
Do I understand it correctly? If yes, what is the reason for using such a high learning rate? Most of the ResNets I've seen were trained with lr=0.01 or lower.
Yes, it is correct and intended. Unlike a typical ResNet, DeepDanbooru has a very high output dimension (about 8K), which makes normalized loss functions converge to very small values. For training stability and speed, DeepDanbooru needs a high learning rate, and 5.0 is an empirical value that I found. You can change it freely.
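To illustrate with a toy snippet (not DeepDanbooru's actual loss code): a mean-reduced sigmoid cross-entropy over ~8K tags divides each logit's gradient by the number of tags, so a nominally huge learning rate behaves like an ordinary one.

```python
import tensorflow as tf

# Toy illustration (not DeepDanbooru's actual training code): mean-reducing
# a multi-label sigmoid cross-entropy over ~8K tags shrinks each logit's
# gradient by a factor of the number of tags.
num_tags = 8000
logits = tf.Variable(tf.zeros((1, num_tags)))
labels = tf.zeros((1, num_tags))

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))

grad = tape.gradient(loss, logits)
print(float(tf.reduce_max(tf.abs(grad))))  # 0.5 / 8000 = 6.25e-05
```

In this toy case, lr=5.0 gives an effective step of roughly 5.0 × 6.25e-5 ≈ 3e-4 per logit.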
Thanks for the explanation! Quite surprising I should say.
FYI: I'm doing transfer learning on v3 with a custom dataset of 2 million images and 6k tags, and the results are incredible after the first epoch (6 hours on a 3090). And that's even before unfreezing the backbone weights. I'm very impressed!
I'm planning to unfreeze after 5 epochs, do you think it would be enough? Also, another random question: is there a reason why you decided to train without a validation set? Just because of the huge dataset size?
Interesting. If you plan to release your network, please reference DeepDanbooru :)
> I'm planning to unfreeze after 5 epochs, do you think it would be enough?
It is totally dependent on your dataset. But I think 5 epochs is a good starting point.
> is there a reason why you decided to train without a validation set? Just because of the huge dataset size?
Yes, the dataset is huge, and making a reliable validation set is another hard task, so I simply didn't.
DeepDanbooru is a great project, rest assured the reference is in your pocket already!
And thanks for the help.
Also, @Superfloh, if you're working with this code too, it would be interesting to collaborate!
I'm trying to use a self-made dataset with 1.4 million images, which are already scaled to 512x512, and around 7600 tags. First I used the default learning rate of 0.001 and a batch size of 5, since my RTX 3080 can't take more. After one epoch (about 24h of training) the loss was at about 350 with P=0.75.
Now I've tried a learning rate of 5 for about 8 hours, and the loss stays at around 2000 with P=0.3, which hasn't changed at all since I started training.
I think I'll continue with lr=0.001 for now, since I can't figure out the problem.
One thing you can try is transfer learning: take a pretrained model, remove the last layer, and freeze the rest. Then add a new last layer and train only its weights. This procedure significantly decreases training time.
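Roughly, the setup looks like this in Keras (a from-memory sketch, not my exact code; the model path, layer index, and tag count are placeholders):

```python
import tensorflow as tf

# Load a pretrained DeepDanbooru model (placeholder path).
base = tf.keras.models.load_model("deepdanbooru-v3/model.h5")

# Drop the final classification layer and freeze everything else.
backbone = tf.keras.Model(inputs=base.input, outputs=base.layers[-2].output)
backbone.trainable = False

# Attach a fresh multi-label head sized for the custom tag list.
num_tags = 6000  # placeholder
outputs = tf.keras.layers.Dense(num_tags, activation="sigmoid")(backbone.output)
model = tf.keras.Model(inputs=backbone.input, outputs=outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy")
# model.fit(train_dataset, epochs=5)  # only the new head's weights update
```

Once the new head converges, setting `backbone.trainable = True` and recompiling with a much lower learning rate fine-tunes the whole network.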
For me it worked just fine. I can't post loss values right now, but the results are pretty impressive! I can share the full code if you want, @Superfloh.
I probably will also try other learning rates later.
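For example, via a per-epoch schedule in plain Keras (a generic sketch, not DeepDanbooru's own configuration mechanism; the rates are placeholders):

```python
import tensorflow as tf

# Try a higher rate for the first epochs, then drop to a small one.
def schedule(epoch, lr):
    return 0.1 if epoch < 2 else 0.001  # placeholder values

lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule)
# model.fit(train_dataset, epochs=30, callbacks=[lr_callback])
```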
That would be nice. Which pretrained model did you use? Also, another question: did you remove unused tags from the database? I still have tags in the database that are not in tags.txt.
I use v3.
I made a custom tag list containing only the tags from my database.
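Something along these lines (a sketch; the file, table, and column names are assumptions, check them against your own database schema):

```python
import sqlite3

# Rebuild tags.txt from the tags that actually occur in the dataset's
# SQLite database ('posts'/'tag_string' are assumed names, not
# necessarily DeepDanbooru's exact schema).
conn = sqlite3.connect("dataset/danbooru.sqlite")
tags = set()
for (tag_string,) in conn.execute("SELECT tag_string FROM posts"):
    tags.update(tag_string.split())

with open("tags.txt", "w") as f:
    f.write("\n".join(sorted(tags)))
```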
Could you drop me an email at yozhikoff [at] protonmail.com? It would be easier to share code there.
@KichangKim do you remember what loss, precision, F1, and recall you had when you trained the v3 model for 30 epochs?
> @KichangKim do you remember what loss, precision, F1, and recall you had when you trained the v3 model for 30 epochs?
Unfortunately, I don't have any logs for previous models.