facebookresearch / adaptive_teacher

This repo provides the source code for "Cross-Domain Adaptive Teacher for Object Detection".

Question about selecting the best model (validation or post-last-step?) #51

Open shjustinbaek opened 1 year ago

shjustinbaek commented 1 year ago

Hi, I have been reading your paper and code, and I am confused about how the best model of the entire training process is selected.

This is how I understand the training code:

  1. Model training (both the burn-in and mutual-learning stages) is performed on the train data.
  2. Model weights are saved every 5000 steps by hooks.PeriodicCheckpointer.
  3. After the last training step finishes (MAX_ITER reached), the resulting weights are used for evaluation.

Please correct me if I am wrong.
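The checkpointing behavior described in step 2 can be sketched as follows. This is a minimal stand-in for detectron2's hooks.PeriodicCheckpointer, not the library's actual implementation; the function name and the extra save at MAX_ITER are assumptions for illustration.

```python
# Hypothetical simplification of periodic checkpointing:
# weights are saved every `period` steps, plus once at the final step.

def checkpoint_steps(max_iter: int, period: int = 5000) -> list[int]:
    """Return the training steps at which weights would be saved."""
    steps = [step for step in range(1, max_iter + 1) if step % period == 0]
    # assume a final checkpoint is also written at max_iter
    if max_iter not in steps:
        steps.append(max_iter)
    return steps

print(checkpoint_steps(12000))  # [5000, 10000, 12000]
```

Under this scheme, the weights evaluated in step 3 are simply the last entry of that list, not necessarily the best-performing checkpoint.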

My questions are:

a. Should I take the model weights after the last training step as the final weights for future inference?
b. Validation loss/metrics do not seem to be calculated in the code, but the paper includes a plot of validation mAP (Figure 4). Are the metrics reported in the paper computed with the post-last-training-step weights, or with weights selected on a validation set?
c. Is there a model-selection function based on a validation loss/metric that I missed in this repo?

Thank you for the great paper and code; I found the contents really interesting. Thanks in advance!

yujheli commented 1 year ago

a. I usually test the model weights from the middle of training, which have better performance.
b. I calculate them in TensorBoard. I download the CSV file and plot the curve using Python matplotlib.
c. I select the best model from the TensorBoard curve and go back to find the corresponding saved checkpoint.
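The workflow in (b) and (c) can be sketched as below: parse a TensorBoard CSV export of the validation mAP curve and map the best-scoring step back to the nearest saved checkpoint. The column names ("Wall time,Step,Value") follow TensorBoard's scalar CSV export; the helper name, the sample data, and the checkpoint period of 5000 are assumptions for illustration.

```python
import csv
import io

def best_checkpoint(csv_text: str, period: int = 5000) -> int:
    """Return the saved-checkpoint step nearest the step with the highest mAP.

    Hypothetical helper: assumes a TensorBoard-style CSV with
    "Wall time,Step,Value" columns and checkpoints saved every `period` steps.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    best_row = max(rows, key=lambda r: float(r["Value"]))
    step = int(float(best_row["Step"]))
    # round to the nearest multiple of `period`, since only those steps were saved
    return max(period, round(step / period) * period)

# Fabricated example data, shaped like a TensorBoard scalar export:
data = """Wall time,Step,Value
1.0,5000,32.1
2.0,10000,41.7
3.0,15000,39.4
"""
print(best_checkpoint(data))  # 10000
```

This mirrors the manual process described: read off the peak of the mAP curve, then load the checkpoint whose step matches it.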