PkuRainBow opened this issue 6 years ago
```python
target = np.array(mask).astype('int32')
```

@zhanghang1989 I mean padding zero for the label map during training.
Actually, it is padded with -1 (which is ignored during training), because we use `_mask_transform()` to handle it.
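For reference, a minimal sketch of what such a mask transform can look like. The helper name comes from the thread, but the exact mapping below (shifting every label down by one so that 0 becomes the ignore value -1) is an assumption for illustration, not the repo's actual code:

```python
import numpy as np

def mask_transform(mask):
    """Convert a label mask to an int32 array, shifting labels down by one
    so the background/padding value 0 becomes -1.

    NOTE: this mapping is an assumed sketch, not the exact
    PyTorch-Encoding implementation.
    """
    target = np.array(mask).astype('int32')
    return target - 1  # label 0 -> -1, which the training loss ignores
```

With a mapping like this, any pixel that was 0 in the raw mask ends up as -1 and contributes nothing to the loss.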
@zhanghang1989 Thanks, I've got it.
But it seems that in your implementation (on ADE20K and Pascal Context) you compute the mIoU without the background class on the validation set and with the background class on the test set (by calling `_mask_transform` only in the validation phase). Besides, all of your implementations use the background class for training. I am wondering whether we should train over background classes for both ADE20K and Pascal Context. Is this a standard setting?
```python
if args.eval:
    testset = get_segmentation_dataset(args.dataset, split='val', mode='testval',
                                       transform=input_transform)
else:
    testset = get_segmentation_dataset(args.dataset, split='test', mode='test',
                                       transform=input_transform)
```
On the test set, we don't have ground-truth masks; we just submit the predictions to the evaluation server.
@zhanghang1989, do you think it would make sense to fill the padding area with the void label for both images and masks? My dataset has no background class, and I set the labels from 0 to num-1.
`label = -1` will be ignored during training.
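In PyTorch this is typically done through the `ignore_index` argument of the loss. A minimal sketch with hypothetical tensors (the shapes and values are made up for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical logits: batch of 1 image, 3 classes, 2x2 pixels.
logits = torch.randn(1, 3, 2, 2)

# Label map where the padded pixel is marked with -1.
target = torch.tensor([[[0, 1], [2, -1]]])

# Pixels labeled -1 contribute nothing to the loss or its gradient.
criterion = nn.CrossEntropyLoss(ignore_index=-1)
loss = criterion(logits, target)
```

Because the -1 pixels are excluded from the reduction, padding the label map with -1 is safe even when 0 is a valid class index.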
Hi @PkuRainBow, do you train over background classes for Pascal Context? I noticed that you illustrate an example with the Pascal Context dataset, and checking your implementation, it seems that you compute the mIoU without considering the background class. Could you give me some guidance on how to deal with the background class, given that our model is trained over 59 classes?
Thus I am wondering whether we should change line 19 in "PyTorch-Encoding/encoding/datasets/pcontext.py".
Besides, I also notice that you pad zero during training, but such padding can introduce extra noise.
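One way to avoid that noise is to pad the label map with the ignore value instead of zero, while still padding the image with zeros. A minimal sketch under that assumption (the sizes are arbitrary, and this is not the repo's code):

```python
import torch
import torch.nn.functional as F

# Hypothetical 2x2 label map, padded by 1 pixel on every side to 4x4.
label = torch.tensor([[0, 1], [2, 3]])

# Pad the label map with -1 so the padded border is excluded by a loss
# configured with ignore_index=-1; the image itself can stay zero-padded.
padded = F.pad(label, (1, 1, 1, 1), value=-1)
```

This way the padded border never acts as a (spurious) class-0 training signal.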