jiwoon-ahn / irn

Weakly Supervised Learning of Instance Segmentation with Inter-pixel Relations, CVPR 2019 (Oral)

About train_aug.txt #9

Closed zhaohui-yang closed 4 years ago

zhaohui-yang commented 4 years ago

Congratulations! This is really good work!

As I was running your code, I noticed that the train_aug.txt file is used to train the CAM network. Where does this file come from? And why not directly use the VOC 2012 trainval set?

Thanks a lot!

zhaohui-yang commented 4 years ago

In make_cam.py (https://github.com/jiwoon-ahn/irn/blob/master/step/make_cam.py#L42), why do highres_cam and strided_cam use different unsqueeze dimensions? As far as I can tell, this makes no difference.

jiwoon-ahn commented 4 years ago

Hi @zhaohui-yang,

  1. It is common practice. Please refer to Sec. 6.1 of the paper.
  2. Yes, they are the same; a toy check of the equivalence is sketched below.
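
Since the summation reduces over the very dimension that `unsqueeze` created, the axis choice washes out. A toy check of this (illustrative shapes, not the repo's exact code):

```python
import torch

# Hypothetical per-scale CAMs of shape (classes, H, W)
cams = [torch.rand(20, 64, 64) for _ in range(2)]

# unsqueeze on dim 0, concatenate along dim 0, sum over dim 0
s0 = torch.cat([c.unsqueeze(0) for c in cams], dim=0).sum(dim=0)
# unsqueeze on dim 1, concatenate along dim 1, sum over dim 1
s1 = torch.cat([c.unsqueeze(1) for c in cams], dim=1).sum(dim=1)

print(torch.allclose(s0, s1))  # True: the unsqueeze axis makes no difference
```
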
zhaohui-yang commented 4 years ago

@jiwoon-ahn Thanks, that answers my questions. One more thing: in resnet50_cam.py, x is detached after passing through layer2, which means gradients cannot backpropagate through it. However, in train_cam.py the parameters of layer1 and layer2 are packed into the backbone's trainable_parameters, which are also handed to the PolyOptimizer. Do the parameters in layer1 and layer2, including the BN weight and bias, actually get updated?
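
A minimal, self-contained repro of the pattern I mean (stand-in modules, not the actual repo code):

```python
import torch
import torch.nn as nn

backbone = nn.Conv2d(3, 8, 3, padding=1)  # stands in for layer1 + layer2
head = nn.Conv2d(8, 8, 3, padding=1)      # stands in for layer3 + layer4

x = torch.rand(1, 3, 16, 16)
feat = backbone(x).detach()  # detach: autograd stops here
loss = head(feat).sum()
loss.backward()

print(backbone.weight.grad)          # None -> the backbone gets no gradient
print(head.weight.grad is not None)  # True -> the head is still trained
```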

zhaohui-yang commented 4 years ago

Will the mean shift introduce a gap between training and evaluation? This layer inherits from BatchNorm2d. During training, x' = (x - mean)/std, whereas during inference, x' = x - mean, which differs from training. What is the reason for this difference?

jiwoon-ahn commented 4 years ago

  1. All parameters in layer1 and layer2, including the BatchNorms, receive no gradients because of the detach, so they are never updated even though they are passed to the optimizer.
  2. During training, the MeanShift layer stores the moving average and returns the input itself, i.e. it acts as an identity function; the moving-average update is achieved simply by calling BatchNorm's forward. A sketch of this behaviour is below.
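
A minimal sketch of that behaviour, reconstructed from the description above (an assumption, not copied from the repo):

```python
import torch
import torch.nn as nn

class MeanShift(nn.BatchNorm2d):
    def __init__(self, num_features):
        super().__init__(num_features, affine=False)

    def forward(self, x):
        if self.training:
            # BatchNorm's forward updates running_mean as a side effect,
            # but the layer returns its input unchanged (identity)
            super().forward(x)
            return x
        # at inference, only the accumulated moving average is subtracted
        return x - self.running_mean.view(1, -1, 1, 1)

ms = MeanShift(3)
x = torch.rand(2, 3, 4, 4)
ms.train()
assert torch.equal(ms(x), x)  # identity during training
ms.eval()
out = ms(x)                   # x minus the moving average at inference
```
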
zhaohui-yang commented 4 years ago

Thank you for your patience. By the way, is this the final version of the code? I have run it three times and got 35.8, 36.0, and 36.2 mAP for instance segmentation (lower than the 37.7 in Tab. 1). The only thing I changed was halving the batch size when training the IRN, because of GPU memory. Can this code guarantee an mAP around 37.7 with the larger batch size?

jiwoon-ahn commented 4 years ago

I have confirmed that this code alone can reproduce the reported results. Please try different hyper-parameters; one idea for the smaller batch size is sketched below.
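
For example, a common heuristic when shrinking the batch size (general practice, not a repo-specific recommendation; the numbers below are illustrative, not the repo defaults):

```python
# Linear learning-rate scaling: halve the batch size, halve the LR.
base_lr, base_batch_size = 0.1, 32                 # illustrative values
my_batch_size = 16                                 # e.g. to fit a 12 GB GPU
my_lr = base_lr * my_batch_size / base_batch_size  # -> 0.05
```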

zhaohui-yang commented 4 years ago

It must be a problem on my end, then. Thank you!

zhaohui-yang commented 4 years ago

@jiwoon-ahn What kind of GPU do you use, a P100 or a V100? My Titan with 12 GB cannot fit batchsize=32 for train_irn. I modified the code for parallel (multi-GPU) training, but encountered the same problem as issue #13.
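
Roughly, the change I tried looks like this (illustrative stand-in model, not the actual IRN network):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # each GPU sees batch_size / n_gpus samples
model = model.cuda()
```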

jiwoon-ahn commented 4 years ago

@zhaohui-yang, Please refer to this comment. https://github.com/jiwoon-ahn/irn/issues/13#issuecomment-533217810 Thanks.