Yes. `train_image_size` is the size of the input crops during training; larger crops give the network more context, which can help segmentation. `batch_size * gpu_num` is the total batch size of each step; a larger total batch makes the batch normalization statistics more stable. So both are quite important.
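For intuition on the batch-norm point, here is a minimal, framework-agnostic NumPy sketch (not code from this repository); the channel shape and the total batch sizes of 4 vs. 16 are illustrative assumptions:

```python
import numpy as np

# Batch norm normalizes each channel with the mean/variance of the
# current batch, so those estimates get noisier as the total batch
# (batch_size * gpu_num) shrinks. Measure that noise empirically.
rng = np.random.default_rng(0)

for total_batch in (4, 16):  # e.g. 1 GPU x 4 vs. 4 GPUs x 4
    # Per-batch means of one channel's activations (true mean is 0).
    batch_means = [
        rng.standard_normal((total_batch, 64, 64)).mean()
        for _ in range(500)
    ]
    print(f"total batch {total_batch:2d}: "
          f"std of per-batch mean estimate = {np.std(batch_means):.5f}")
```

The standard error of the batch statistics shrinks roughly as 1/sqrt(N) in the number of pooled samples, which is why a larger `batch_size * gpu_num` (or synchronizing batch norm across GPUs) stabilizes training.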
Got it! Thank you very much!
Hello! I am new to semantic segmentation and deep learning, so I might ask a stupid question. Due to my limited GPU resources, I trained ResNet-50 on Cityscapes with the following command.
```
python ./train.py --batch_size 4 --gpu_num 1 --consider_dilated 1 --weight_decay_rate 0.0001 --weight_decay_rate2 0.001 --random_rotate 0 --database 'Cityscapes' --train_image_size 512 --test_image_size 512
```
I only changed `train_image_size` and `test_image_size` from 816 to 512, and set `gpu_num` to 1 instead of 4. I then got 69.93 mIoU on the val set, which is lower than yours. Does changing the image size and GPU count affect the performance?
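To make the gap concrete, a small sketch of the effective settings (the reference run's per-GPU `batch_size` of 4 is my assumption; only `gpu_num` 4 and `train_image_size` 816 are stated above):

```python
# Hypothetical comparison of effective training settings.
reference = {"batch_size": 4, "gpu_num": 4, "train_image_size": 816}  # batch_size assumed
this_run = {"batch_size": 4, "gpu_num": 1, "train_image_size": 512}

for name, cfg in (("reference", reference), ("this run", this_run)):
    total = cfg["batch_size"] * cfg["gpu_num"]
    print(f"{name}: total batch = {total:2d}, crop = {cfg['train_image_size']} px")
```

Under these assumptions, the run above sees a 4x smaller total batch for batch-norm statistics and roughly 2.5x less image area per crop, which is consistent with the lower mIoU.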