KichangKim / DeepDanbooru

AI-based multi-label girl image classification system, implemented using TensorFlow.
MIT License

Choosing the learning rate #34

Closed yozhikoff closed 3 years ago

yozhikoff commented 3 years ago

Hello @KichangKim !

I have a question about the learning rate values used for training, especially for v3. As far as I understand the code

"learning_rates": [
  {
      "used_epoch": 0,
      "learning_rate": 5.0
  }
]
# Update learning rate: the last entry whose used_epoch has been reached wins
for learning_rate_per_epoch in learning_rates:
    if learning_rate_per_epoch['used_epoch'] <= int(used_epoch):
        learning_rate = learning_rate_per_epoch['learning_rate']

you set the learning rate to 5.0 at epoch 0 and never change it afterwards.
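
(If I read the loop right, the last entry whose used_epoch has been reached wins, so a decaying schedule would presumably look like this; the values here are hypothetical:)

"learning_rates": [
    { "used_epoch": 0,  "learning_rate": 5.0 },
    { "used_epoch": 10, "learning_rate": 1.0 },
    { "used_epoch": 20, "learning_rate": 0.1 }
]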

Do I understand it correctly? If so, what is the reason for using such a high learning rate? Most of the ResNets I've seen were trained with lr=0.01 or lower.

KichangKim commented 3 years ago

Yes, it is correct and intended. Unlike a typical ResNet, DeepDanbooru has a very high output dimension (about 8K tags), which makes normalized loss functions converge to a very small value. To get training stability and speed, DeepDanbooru needs a high learning_rate, and 5.0 is an empirical value that I found. You can change it freely.
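
As a toy illustration (assumed numbers, not project code): with ~8000 sigmoid outputs and only a few positive tags per image, a mean-reduced binary cross-entropy is dominated by thousands of easy negatives, so the loss and its gradient end up tiny:

import numpy as np

num_tags = 8000
y_true = np.zeros(num_tags)
y_true[:10] = 1.0                  # a handful of positive tags per image
y_pred = np.full(num_tags, 0.01)   # easy, confident negatives
y_pred[:10] = 0.5                  # undecided positives

eps = 1e-7
bce = -(y_true * np.log(y_pred + eps) + (1 - y_true) * np.log(1 - y_pred + eps))
print(bce.mean())  # ~0.011: the mean loss is very small, so a large
                   # learning rate compensates for the tiny step size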

yozhikoff commented 3 years ago

Thanks for the explanation! Quite surprising, I should say.

FYI: I'm doing transfer learning on v3 with a custom dataset of 2 million images and 6k tags, and the results are incredible after the first epoch (6 hours on a 3090). And that's even before unfreezing the backbone weights. I'm very impressed!

I'm planning to unfreeze after 5 epochs, do you think that would be enough? Also, another random question: is there a reason why you decided to train without a validation set? Just because of the huge dataset size?

KichangKim commented 3 years ago

Interesting. If you have a plan to release your network, please reference DeepDanbooru :)

I'm planning to unfreeze after 5 epochs, do you think that would be enough?

It totally depends on your dataset, but I think 5 epochs is a good starting point.

is there a reason why you decided to train without a validation set? Just because of the huge dataset size?

Yes, the dataset is huge and building a reliable validation set is another hard task, so I simply didn't.

yozhikoff commented 3 years ago

DeepDanbooru is a great project, so rest assured, the reference is already a given!

And thanks for the help.

yozhikoff commented 3 years ago

Also, @Superfloh, if you're working with this code too, it would be interesting to collaborate!

Superfloh commented 3 years ago

I'm trying to use a self-made dataset of 1.4 million images, already scaled to 512x512, with around 7600 tags. First I used the default learning rate of 0.001 and a batch size of 5, since my RTX 3080 can't take more. After one epoch (about 24h of training) the loss was at about 350 with P=0.75.

Now I've tried a learning rate of 5 for about 8 hours, and the loss stays at around 2000 with P=0.3, which hasn't changed at all since I started training.

I think I'll continue with lr=0.001 for now, since I can't figure out the problem.

yozhikoff commented 3 years ago

One thing you can try is transfer learning. Take a pretrained model, remove the last layer, and freeze the rest. Then put a new last layer on top and train, updating only its weights. This procedure significantly decreases training time; a rough sketch is below.
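
In Keras it looks roughly like this (a minimal sketch; the model path, tag count, and hyperparameters are placeholders, not the exact code I use):

import tensorflow as tf

NUM_NEW_TAGS = 6000  # placeholder: tag count of the new dataset

# Load the pretrained model (placeholder path).
base = tf.keras.models.load_model('deepdanbooru-v3/model.h5')

# Drop the old classification head and freeze the backbone.
backbone = tf.keras.Model(inputs=base.input, outputs=base.layers[-2].output)
backbone.trainable = False

# Attach a fresh multi-label head: one sigmoid unit per tag.
new_head = tf.keras.layers.Dense(NUM_NEW_TAGS, activation='sigmoid')(backbone.output)
model = tf.keras.Model(inputs=backbone.input, outputs=new_head)

# Train the head only; unfreeze the backbone later for fine-tuning
# (set backbone.trainable = True and re-compile).
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss='binary_crossentropy')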

For me it worked just fine. I can't post loss values right now, but the results are pretty impressive! I can share the code if you want, @Superfloh.

I'll probably also try other learning rates later.

Superfloh commented 3 years ago

That would be nice. Which pretrained model did you use? Another question: did you remove unused tags from the database? I still have tags in the database that are not in tags.txt.

yozhikoff commented 3 years ago

I use v3.

I made a custom tag list containing only tags from my database.
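
The filtering itself is trivial; something like this (the file names are hypothetical):

# Keep only the tags that actually occur in the database.
with open('db_tags.txt') as f:                 # hypothetical dump of DB tags
    used_tags = {line.strip() for line in f}

with open('tags.txt') as src, open('tags-filtered.txt', 'w') as dst:
    for line in src:
        if line.strip() in used_tags:
            dst.write(line)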

Could you drop me an email at yozhikoff [at] protonmail.com? It would be easier to share code there.

Superfloh commented 3 years ago

@KichangKim do you remember what loss, precision, F1 and recall you had when you trained the v3 model for 30 epochs?

KichangKim commented 3 years ago

@KichangKim do you remember what loss, precision, F1 and recall you had when you trained the v3 model for 30 epochs?

Unfortunately, I don't have any logs for previous models.