[Open] JiuqingDong opened this issue 1 year ago
How did you manage to make this thing work from the repository in the first place?
Hi, yes. This option turns on linear_prob, and the current command should work. You may need to tune the hyper-parameters a bit (e.g., the number of training epochs, the learning rate, and the weight decay) to obtain the best performance.
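For context, a linear-probe override of this kind conventionally freezes the whole network except the final prediction layers. A minimal sketch of the idea in PyTorch (illustrative only; apply_linear_probe and the name-based head_keywords filter are assumptions, not the repo's actual implementation):

import torch.nn as nn

def apply_linear_probe(model: nn.Module, head_keywords=("head", "predictor")) -> None:
    # Freeze every parameter, then re-enable gradients only for
    # parameters whose names suggest they belong to the task heads.
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in head_keywords)

Note that freezing only skips the backward pass and optimizer updates for the frozen parameters; the forward pass through the backbone runs either way, so per-iteration wall-clock time drops less than one might expect.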
But when I did this, the training time stayed the same. I would expect linear_prob training to be faster than fine-tuning the full model, so I am not sure whether it is working correctly.
Hi, I want to do linear probing on the COCO dataset. Is the command below correct? I just added the parameter 'SOLVER.TUNING_HIGHLEVEL_OVERRIDE linear_prob' as the last line.
python -m torch.distributed.launch --nproc_per_node=3 tools/train_net.py \
    --config-file "./configs/pretrain/glip_Swin_T_O365_GoldG.yaml" \
    --skip-test \
    MODEL.WEIGHT "./glip_tiny_model_o365_goldg_cc_sbu.pth" \
    DATASETS.TRAIN '("coco_grounding_train", )' \
    MODEL.BACKBONE.FREEZE_CONV_BODY_AT -1 \
    SOLVER.IMS_PER_BATCH 3 \
    SOLVER.USE_AMP True \
    SOLVER.MAX_EPOCH 1 \
    TEST.DURING_TRAINING False \
    TEST.IMS_PER_BATCH 3 \
    SOLVER.FIND_UNUSED_PARAMETERS False \
    SOLVER.BASE_LR 0.00001 \
    SOLVER.LANG_LR 0.00001 \
    SOLVER.STEPS "(0.67,0.89)" \
    DATASETS.DISABLE_SHUFFLE True \
    MODEL.DYHEAD.SCORE_AGG "MEAN" \
    TEST.EVAL_TASK detection \
    OUTPUT_DIR "./output/coco" \
    SOLVER.TUNING_HIGHLEVEL_OVERRIDE linear_prob
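Since the timing question above is hard to settle from wall-clock time alone, a more direct check is to count which parameters still require gradients once the override has been applied. A minimal sketch, assuming you can call it in tools/train_net.py right after the model is built (summarize_trainable is a hypothetical helper, not part of the repo):

import torch.nn as nn

def summarize_trainable(model: nn.Module) -> None:
    # Report trainable vs. frozen parameter counts. Under linear probing,
    # the backbone should land entirely in the frozen bucket; if it does
    # not, the linear_prob override did not take effect.
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
    total = trainable + frozen
    print(f"trainable: {trainable:,} ({100.0 * trainable / total:.1f}% of {total:,})")
    print(f"frozen:    {frozen:,}")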