mapillary / inplace_abn

In-Place Activated BatchNorm for Memory-Optimized Training of DNNs
BSD 3-Clause "New" or "Revised" License

Performance drops compared with PyTorch 0.4.1 #136

Open PkuRainBow opened 5 years ago

PkuRainBow commented 5 years ago

Really good work! We trained our model with PyTorch 1.2 and your latest inplace ABN, and we find that the performance (with all the same training settings) drops slightly compared with PyTorch 0.4.1.

We report the results on the Cityscapes val set below:

| Framework | Model | Score |
| --- | --- | --- |
| PyTorch 1.2 | FCN (ResNet101) | 75.5 |
| PyTorch 0.4.1 | FCN (ResNet101) | 76.0 |

It would be great if anyone could share some advice!

bwang-delft commented 5 years ago

Hi, are you using the same random seeds for NumPy, CUDA, and the DataLoader?

PkuRainBow commented 5 years ago

@bwang-delft We simply set the random seeds as below (the default value of seed is 304),

        import random
        import torch

        random.seed(args_parser.seed)
        torch.manual_seed(args_parser.seed)
bwang-delft commented 5 years ago

> @bwang-delft We simply set the random seeds as below (the default value of seed is 304),
>
>     random.seed(args_parser.seed)
>     torch.manual_seed(args_parser.seed)

I think you also need to set the random seed for CUDA if you are using a GPU. I'm not sure how to set the random seed for the DataLoader, but I think https://github.com/pytorch/pytorch/issues/7068 discusses how to do that.
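For reference, here is a minimal sketch of seeding everything at once, including CUDA and the DataLoader workers, along the lines of the linked PyTorch issue. The names `seed_everything` and `worker_init_fn` are illustrative, and `train_dataset` is a placeholder for your own dataset:

    import random

    import numpy as np
    import torch
    from torch.utils.data import DataLoader

    def seed_everything(seed: int = 304) -> None:
        # Seed Python, NumPy, and PyTorch (CPU and all GPUs).
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Make cuDNN deterministic, at some cost in speed.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    def worker_init_fn(worker_id: int) -> None:
        # Derive a distinct but deterministic seed for each DataLoader worker.
        worker_seed = torch.initial_seed() % 2**32
        random.seed(worker_seed)
        np.random.seed(worker_seed)

    seed_everything(304)
    # loader = DataLoader(train_dataset, batch_size=8, shuffle=True,
    #                     num_workers=4, worker_init_fn=worker_init_fn)

Note that even with identical seeds, exact numbers are not guaranteed to match across PyTorch versions, since RNG streams and kernel implementations change between releases.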