Closed: zhaohui-yang closed this issue 4 years ago
In make_cam.py (https://github.com/jiwoon-ahn/irn/blob/master/step/make_cam.py#L42), why do highres_cam and strided_cam use different unsqueeze dimensions? As far as I can tell, the choice makes no difference.
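For reference, a minimal numpy sketch of what the two unsqueeze axes do (the shape `(20, 32, 32)` is a made-up stand-in for a per-image CAM tensor, not taken from the repo). PyTorch's `unsqueeze(0)` / `unsqueeze(1)` correspond to `np.expand_dims` on axis 0 / axis 1:

```python
import numpy as np

# Hypothetical CAM tensor for one image: (num_classes, H, W).
cam = np.random.rand(20, 32, 32)

# The singleton dimension lands in a different position depending on the axis.
a = np.expand_dims(cam, axis=0)  # shape (1, 20, 32, 32)
b = np.expand_dims(cam, axis=1)  # shape (20, 1, 32, 32)
print(a.shape, b.shape)

# If the singleton axis is later removed again (squeezed or reduced over),
# both choices recover identical values, which is why the two variants can
# behave the same downstream.
assert np.array_equal(a.squeeze(0), b.squeeze(1))
```

So the two calls produce differently shaped tensors, but if the extra axis is only there to be broadcast or squeezed away later, the values that come out are the same.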
Hi @zhaohui-yang,
@jiwoon-ahn Thanks, that answers my question. One more thing: in resnet50_cam.py, x is detached after passing through layer2, so gradients cannot backpropagate past that point. However, in train_cam.py, the parameters of layer1 and layer2 are packed into the backbone group (trainable_parameters), which is also passed to the PolyOptimizer. Do the parameters of layer1 and layer2, including the BN weight and bias, actually get updated?
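A minimal pure-Python sketch of the interaction being asked about (the parameter names are illustrative, not from the repo): PyTorch optimizers skip any parameter whose `.grad` is `None`, so registering detached layers' parameters with the optimizer is harmless — they never receive a gradient and therefore never move.

```python
# Toy SGD step mirroring how PyTorch optimizers treat grad=None:
# parameters behind a detach() never receive a gradient, so they are skipped.

params = {"layer1.weight": 1.0, "layer2.weight": 2.0, "classifier.weight": 3.0}

# Backprop through a graph detached after layer2 yields no gradient
# for layer1/layer2, only for the layers after the detach point.
grads = {"layer1.weight": None, "layer2.weight": None, "classifier.weight": 0.5}

lr = 0.1
for name, g in grads.items():
    if g is None:  # torch.optim skips parameters with grad=None the same way
        continue
    params[name] -= lr * g

print(params)  # only classifier.weight changes
```

Note this covers only the learnable weight and bias; a BatchNorm layer in train mode still updates its running mean/var statistics regardless of gradients, which is a separate mechanism.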
Will the mean shift introduce a gap between training and inference? This layer inherits from BatchNorm2d. During training, x' = (x - mean) / std, but during inference, x' = x - mean, which differs from the training-time transform. What is the reason for the difference?
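Setting the repo's implementation details aside, a minimal numpy sketch of the mismatch described above (toy data, not the layer's actual statistics): dividing by std only on the training path leaves the two outputs on different scales whenever std != 1.

```python
import numpy as np

# Toy feature values and their batch statistics.
x = np.array([0.0, 2.0, 4.0])
mean, std = x.mean(), x.std()

train_out = (x - mean) / std  # zero mean AND unit variance
infer_out = x - mean          # zero mean only; scale is untouched

print(train_out.std(), infer_out.std())  # differ unless std == 1
```

Both paths center the features, but only the training path rescales them, so the distributions the downstream layers see at train and test time differ by a factor of std.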
Thank you for your patience. By the way, is this the final version of the code? I have run it three times and got 35.8, 36.0, and 36.2 mAP for instance segmentation (lower than the 37.7 in Tab. 1). The only change I made was halving the batch size when training the IRN, because of GPU memory. Would this code reach an mAP around 37.7 with the larger batch size?
I have confirmed this code alone can reproduce the reported results. Please try with different hyper-parameters.
Must be my problem. Thank you!
@jiwoon-ahn What kind of GPU do you use, P100 or V100? My Titan with 12 GB cannot fit batch size 32 for train_irn. I modified the code to support parallel training, but ran into the same problem as issue #13.
@zhaohui-yang, Please refer to this comment. https://github.com/jiwoon-ahn/irn/issues/13#issuecomment-533217810 Thanks.
Congratulations! This is really good work!
While running your code, I noticed that the train_aug.txt file is used to train the CAM. Where does this file come from? And why not directly use the VOC2012 trainval set?
Thanks a lot!