facebookresearch / unbiased-teacher

PyTorch code for ICLR 2021 paper Unbiased Teacher for Semi-Supervised Object Detection
https://arxiv.org/abs/2102.09480
MIT License
412 stars 82 forks

I want to make sure I'm right about what I know #28

Closed yeonsikch closed 3 years ago

yeonsikch commented 3 years ago

Hi, Thanks for your paper and code.

I want to make sure I'm right about what I know. Please tell me whether these are correct:

  1. Is it right that the burn-in stage uses 2k iterations for all of the {1, 5, 10, 20}% label settings?
  2. When I use batch size = (labeled = 12, unlabeled = 12), the batch size is 12 (labeled data only) during the burn-in stage, and then (12, 12) (labeled, unlabeled) when training the student model. Is that right?
  3. In the training log, I see two consecutive AP tables. What is the difference between them?

Thanks!

ycliu93 commented 3 years ago
  1. Yes, all degrees of supervision use 2k iterations for the burn-in stage.
  2. Yes, batch sizes of the labeled set are the same across the burn-in stage and mutual-learning stage and are set by SOLVER.IMG_PER_BATCH_LABEL.
  3. First AP table is for the student model, and the second AP table is for the teacher model.
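The two-stage batching described above can be sketched as follows. This is a minimal illustration, not the repository's actual code; the config names `IMG_PER_BATCH_LABEL`, `IMG_PER_BATCH_UNLABEL`, and the 2k burn-in length are taken from the answers in this thread:

```python
# Toy sketch of how batch composition changes between stages.
# Values mirror SOLVER.IMG_PER_BATCH_LABEL / SOLVER.IMG_PER_BATCH_UNLABEL = 12.

IMG_PER_BATCH_LABEL = 12
IMG_PER_BATCH_UNLABEL = 12
BURN_UP_STEP = 2000  # burn-in length, same for all supervision ratios


def get_batch_sizes(iteration):
    """Return (labeled, unlabeled) batch sizes at a given training iteration."""
    if iteration < BURN_UP_STEP:
        # Burn-in stage: the student trains on labeled data only.
        return (IMG_PER_BATCH_LABEL, 0)
    # Mutual-learning stage: the labeled batch size stays the same,
    # and an unlabeled batch (pseudo-labeled by the teacher) is added.
    return (IMG_PER_BATCH_LABEL, IMG_PER_BATCH_UNLABEL)
```

So with a (12, 12) setting, every iteration after burn-in sees 12 labeled plus 12 unlabeled images, while the labeled batch size never changes.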

Thanks

yeonsikch commented 3 years ago

Thanks for your answer.

May I ask one more question?

If I want to do supervised learning with {1%, 5%, 20%, ...} of the labels, do I have to use other code, not this one? I think the first AP block corresponds to supervised learning.

ycliu93 commented 3 years ago

The burn-in stage is basically supervised learning of the student model, so you could extend the training iterations of the burn-in stage (from 2k to 90k/180k) for supervised training (labeled data only).
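A minimal sketch of that suggestion, modeling the config as a plain dict. The key names `SEMISUPNET.BURN_UP_STEP` and `SOLVER.MAX_ITER` are assumptions based on this thread and Detectron2-style configs; check them against the actual config files:

```python
# Assumed config keys (check against the repo's configs): setting the
# burn-in length equal to the total iteration count keeps training in
# the burn-in (supervised, labeled-data-only) stage for the whole run.

def make_supervised_only(cfg):
    """Extend burn-in to cover all iterations -> a supervised-only baseline."""
    cfg["SEMISUPNET"]["BURN_UP_STEP"] = cfg["SOLVER"]["MAX_ITER"]
    return cfg


cfg = {
    "SOLVER": {"MAX_ITER": 90000},
    "SEMISUPNET": {"BURN_UP_STEP": 2000},  # default 2k burn-in
}
cfg = make_supervised_only(cfg)
```

With this change the mutual-learning stage is never reached, which gives the supervised baseline to compare against semi-supervised results.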

yeonsikch commented 3 years ago

Thank you so much! This is very helpful to me. Finally, I have just one question.

During the burn-in stage, the student model is a standard Faster R-CNN. Is that right?

I want to compare supervised learning and semi-supervised learning.

Thanks.