facebookresearch / adaptive_teacher

This repo provides the source code for "Cross-Domain Adaptive Teacher for Object Detection".

Optimal number of learning iterations #26

Status: Open. miurakenichi opened this issue 2 years ago.

miurakenichi commented 2 years ago

I have a question about the paper. Looking at Figure 4, the score rises between roughly 10k and 20k iterations, but it has not increased much after that. In that case, is about 20k iterations enough for training?

[image: Figure 4 from the paper]

yujheli commented 2 years ago

Yes, in my case 30k iterations would be sufficient for our experimental dataset. This may change for other customized datasets.
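If you want to stop around there, here is a minimal sketch of the solver keys involved (same keys as the full config posted later in this thread; the burn-up split below is only illustrative and would need tuning per dataset):

SOLVER:
  MAX_ITER: 30000        # stop near the plateau seen in Figure 4
SEMISUPNET:
  BURN_UP_STEP: 10000    # illustrative: supervised burn-up, then ~20k iterations of mutual learning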

miurakenichi commented 2 years ago

Thank you for your response. I have another question related to this. The paper does not describe the loss trends, so how did the losses behave during mutual learning? In my follow-up test they looked like the graph below; is that as expected? (In this run, mutual learning starts at 80k; the relevant setting is shown after the plots below.)

[image: loss curves during training]

score: [image: score curve]
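For reference, the 80k start point corresponds to the burn-up setting in my config (posted in full below):

SEMISUPNET:
  BURN_UP_STEP: 80000    # supervised-only burn-up; mutual learning begins after this iteration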

yujheli commented 2 years ago

@miurakenichi Yes, I think so. Is this the performance on clipart1k?

miurakenichi commented 2 years ago

@yujheli Yes, it's on Clipart1k.

yujheli commented 2 years ago

@miurakenichi Looks like your reproduced results match the scores I have with the internal FB code. I am also still figuring out the NaN issue, which happens in the local code but not in the internal code at: https://github.com/facebookresearch/adaptive_teacher/tree/main/prod_lib

Could you share your config or any modifications to the current GitHub code, so I can better understand what I should change to match the results of the internal FB code?

miurakenichi commented 2 years ago

@yujheli The code has not been modified. I used the code from commit cba3c59cadfc9f1a3a676a82bf63d76579ab552b. The config is as follows:

_BASE_: "./Base-RCNN-C4.yaml"
MODEL:
  META_ARCHITECTURE: "DAobjTwoStagePseudoLabGeneralizedRCNN"
  WEIGHTS: "detectron2://ImageNetPretrained/MSRA/R-101.pkl"
  MASK_ON: False
  RESNETS:
    DEPTH: 101
    #OUT_FEATURES: ["res2", "res3", "res4", "res5"]
  #BACKBONE:
  #  NAME: "build_resnet_fpn_backbone"
  FPN:
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  PROPOSAL_GENERATOR:
    NAME: "PseudoLabRPN"
  # RPN:
  #   POSITIVE_FRACTION: 0.25
  ROI_HEADS:
    NAME: "StandardROIHeadsPseudoLab"
    LOSS: "CrossEntropy" # variant: "CrossEntropy"
    NUM_CLASSES: 20
  ROI_BOX_HEAD:
    NAME: "FastRCNNConvFCHead"
    NUM_FC: 2
    POOLER_RESOLUTION: 7
SOLVER:
  LR_SCHEDULER_NAME: "WarmupTwoStageMultiStepLR"
  STEPS: (240000, 320000, 360000, 1440000)
  FACTOR_LIST: (1, 1, 1, 1, 1)
  MAX_ITER: 400000
  IMG_PER_BATCH_LABEL: 4
  IMG_PER_BATCH_UNLABEL: 4
  IMS_PER_BATCH: 4
  BASE_LR: 0.01
DATALOADER:
  SUP_PERCENT: 100.0
DATASETS:
  CROSS_DATASET: True
  TRAIN_LABEL: ("voc_2012_trainval","voc_2007_trainval")
  TRAIN_UNLABEL: ("Clipart1k_train",)
  TEST: ("Clipart1k_test",)
SEMISUPNET:
  Trainer: "ateacher"
  BBOX_THRESHOLD: 0.8
  TEACHER_UPDATE_ITER: 1
  BURN_UP_STEP: 80000
  EMA_KEEP_RATE: 0.9996
  UNSUP_LOSS_WEIGHT: 1.0
  SUP_LOSS_WEIGHT: 1.0
  DIS_TYPE: "res4" #["concate","p2","multi"]
TEST:
  EVAL_PERIOD: 4000
OUTPUT_DIR: ./output/faster_rcnn_R101_cross_clipart_mod
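For reference, my reading of the mutual-learning keys above (the comments are my annotation, based on the EMA teacher update described in the paper):

SEMISUPNET:
  TEACHER_UPDATE_ITER: 1    # update the teacher by EMA every iteration
  EMA_KEEP_RATE: 0.9996     # teacher = 0.9996 * teacher + (1 - 0.9996) * student
  UNSUP_LOSS_WEIGHT: 1.0    # weight on the pseudo-label (unsupervised) losses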

yujheli commented 2 years ago

@miurakenichi Really appreciate it!

yujheli commented 2 years ago

@miurakenichi I realized that you were using FPN, which generally gives improved performance. Did you try the config without FPN?

miurakenichi commented 2 years ago

@yujheli I did not intend to use FPN, because the following is commented out:

  #BACKBONE:
  #  NAME: "build_resnet_fpn_backbone"

Do I have to comment out the following too?

  FPN:
    IN_FEATURES: ["res2", "res3", "res4", "res5"]

yujheli commented 2 years ago

@miurakenichi I see. If you are using the default backbone, then I think it is fine.
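For anyone else reading: detectron2 only builds the backbone named by MODEL.BACKBONE.NAME, so as long as that override stays commented out, the default C4 backbone is used and the MODEL.FPN block is simply ignored. For contrast, a sketch of what actually enabling FPN would look like (these are the standard detectron2 FPN keys, not something verified against this repo's heads):

MODEL:
  BACKBONE:
    NAME: "build_resnet_fpn_backbone"
  RESNETS:
    OUT_FEATURES: ["res2", "res3", "res4", "res5"]   # stages consumed by FPN
  FPN:
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
  RPN:
    IN_FEATURES: ["p2", "p3", "p4", "p5", "p6"]      # FPN output levels
  ROI_HEADS:
    IN_FEATURES: ["p2", "p3", "p4", "p5"]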