Megvii-BaseDetection / DenseTeacher

DenseTeacher: Dense Pseudo-Label for Semi-supervised Object Detection
Apache License 2.0

need for a log file. #9

Closed buluofeng2 closed 2 years ago

buluofeng2 commented 2 years ago

I'm having some trouble installing cvpods, and a log file would help me better understand the training process. Could you share one if it's convenient?

ZRandomize commented 2 years ago

You are welcome: example_log.log

buluofeng2 commented 2 years ago

okay, thank you!

buluofeng2 commented 2 years ago

I have tried this approach in another detection framework and implemented the following parts of your code:

  1. the data augmentation (4 images per batch for the supervised branch and 2 images per batch for both the teacher and student branches)
  2. EMA, which starts from iteration 3000 with momentum 0.9996
  3. the burn-in strategy, which takes 5000 iterations
  4. the distill loss for the unsupervised branch
  5. the learning rate, which is warmed up linearly to 0.01 over the first 1000 iterations and decayed to 0.001 in the last epoch (iteration 216200)

However, in my 10%-data experiment I can only reach 31% mAP, which is achieved around iteration 66000, and the mAP does not increase after that. I wonder if there are any other key factors that might be preventing the accuracy from improving. Alternatively, could I get a complete log of the 10%-data experiment, so that I can find the problem myself?

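For reference, the EMA teacher update in step 2 can be sketched as follows. This is a minimal illustration, assuming a plain parameter dictionary; the function and constant names (`update_teacher`, `EMA_START_ITER`, `EMA_MOMENTUM`) are illustrative and not taken from the DenseTeacher codebase.

```python
EMA_START_ITER = 3000   # iteration at which EMA tracking begins (assumption from the list above)
EMA_MOMENTUM = 0.9996   # keep-rate for the teacher's existing weights

def update_teacher(teacher_params, student_params, cur_iter):
    """In-place EMA update: teacher = m * teacher + (1 - m) * student."""
    if cur_iter < EMA_START_ITER:
        return  # before EMA starts, the teacher is left untouched
    if cur_iter == EMA_START_ITER:
        # initialize the teacher as an exact copy of the student
        for k in teacher_params:
            teacher_params[k] = student_params[k]
        return
    m = EMA_MOMENTUM
    for k in teacher_params:
        teacher_params[k] = m * teacher_params[k] + (1.0 - m) * student_params[k]
```

With momentum this close to 1, the teacher changes very slowly, which is why it only becomes useful once the student has trained for a while.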
ZRandomize commented 2 years ago

You can check the full config at line 374 in the log file. It seems we used an equal number of supervised and unsupervised images per epoch; could that be the problem? We use an unsupervised weight of 4 for the logits and 1 for the deltas; is that the same in your setup?
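The weighting described above can be sketched as a simple weighted sum of the two distillation terms. This is an illustrative sketch only; the function name `unsup_loss` and its arguments are hypothetical, not from the repository.

```python
LOGITS_WEIGHT = 4.0  # weight on the classification-logit distillation term
DELTAS_WEIGHT = 1.0  # weight on the box-delta distillation term

def unsup_loss(logits_loss, deltas_loss,
               w_logits=LOGITS_WEIGHT, w_deltas=DELTAS_WEIGHT):
    """Combine the two distillation terms into the unsupervised loss."""
    return w_logits * logits_loss + w_deltas * deltas_loss
```

Getting this ratio wrong (e.g. equal weights on both terms) is the kind of mismatch that could plausibly explain a few points of mAP.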

The full log is here: full_log.log (performance curve attached as an image)

BTW, I'm wondering about the performance of your baseline.

buluofeng2 commented 2 years ago

Thank you so much.