Thanks for your wonderful work! I retrained the part segmentation model you implemented in the repo "https://github.com/huanghoujing/PyTorch-Encoding", but my result is only pixAcc: 0.8866, mIoU: 0.6208, which is much worse than the released model (pixAcc: 0.9034, mIoU: 0.6670). So I wonder whether my training hyper-params differ from yours. I used the defaults, as follows:
```shell
CUDA_VISIBLE_DEVICES=0 \
python experiments/coco_part/train.py \
--norm_layer bn \
--train-split train \
--batch-size 16 \
--test-batch-size 16 \
--exp_dir exp/train
```