Sorry for my negligence; actually only the res1 & res2 layers are fixed during the stage-1 RPN training & the stage-2 R-FCN training. After that, in stages 2 & 3, the RPN was fine-tuned with all convolution layers fixed, which is much more sound.
In the end-to-end approximate joint training case, fine-tuning is enabled for all layers after res2c of the backbone.
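For reference, here is a minimal illustrative prototxt sketch (layer names, bottoms, and convolution_param values are assumptions, not copied from this repo) of how Caffe distinguishes a frozen layer from a fine-tuned one: setting lr_mult and decay_mult to 0 stops all weight updates for that layer, while lr_mult: 1 leaves it trainable.

```
# Frozen layer (e.g. inside res1/res2, or the whole backbone when lr_mult is 0):
layer {
  name: "res2a_branch2a"      # illustrative name
  type: "Convolution"
  bottom: "pool1"
  top: "res2a_branch2a"
  param { lr_mult: 0.0 decay_mult: 0.0 }   # no gradient updates, no weight decay
  convolution_param { num_output: 64 kernel_size: 1 bias_term: false }
}

# Fine-tuned layer (e.g. a layer after res2c in the end-to-end case):
layer {
  name: "res3a_branch2a"      # illustrative name
  type: "Convolution"
  bottom: "res2c"
  top: "res3a_branch2a"
  param { lr_mult: 1.0 decay_mult: 1.0 }   # weights updated during training
  convolution_param { num_output: 128 kernel_size: 1 bias_term: false }
}
```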
I saw the following lines inside the backbone network in rfcn_ohem_train.pt:

param { lr_mult: 0.0 decay_mult: 0.0 }
Does that mean the convolution layers of the backbone network are never fine-tuned during R-FCN training?