suriyachaudary / Self-supervision-for-segmenting-overhead-imagery

[BMVC 2018] Self-supervised Feature Learning for Semantic Segmentation of Overhead Imagery.

Replicating the experiment on Potsdam dataset #2

Open edizhuang opened 5 years ago

edizhuang commented 5 years ago

Hi Suriya,

I'm replicating your method on the Potsdam dataset, but my mean IoU and accuracy are lower than the results in the paper (I get 53.88 and 74.44 for 10% labeled data, 60.80 and 80.06 for 25% labeled data, 63.53 and 82.03 for 50% labeled data, and 66.32 and 83.75 for 100% labeled data). Could that be due to a version difference? I'm using PyTorch 1.0.1.

Also, when using resnet18_encoderdecoder(), the only difference between your method and Context Encoders is the "use_coach" parameter, right? And do you have the code to run the other comparison methods, such as ResNet-18 (autoencoder/scratch/ImageNet) in Table 2 of the paper?

Thanks!

suriyachaudary commented 5 years ago

Hi, I am assuming that the model is pre-trained with 100% unlabelled data (i.e., self_supervised_split = 'train_crops'). Could you please share the mean IoU numbers of the model trained with only random masks (in the semantic segmentation section, replace iter_ = len(epochs) - 1 with iter_ = 0, as sketched below)? This will give me a better idea of the exact issue. We observed results fluctuating within +/- 0.5% mIoU when the initial random seeds were changed.
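In case a concrete snippet helps, this is the change being requested; self_supervised_split, epochs, and iter_ are the notebook's own variables, and the epochs value here is just illustrative (keep whatever your run used):

```python
# Pre-train on 100% unlabelled data.
self_supervised_split = 'train_crops'

# In the semantic segmentation section, pick the checkpoint from
# iteration 0 instead of the final coach-guided iteration:
epochs = [100]              # illustrative; keep your own schedule
# iter_ = len(epochs) - 1  # original: final (coach-guided) iteration
iter_ = 0                   # ablation: random-mask-only pre-training
```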

Yes, toggling the use_coach flag in the notebook is sufficient to switch between our method and Context Encoders. Random patches of the image are erased when use_coach = False, whereas when use_coach = True the mask values are sampled from a uniform distribution in iteration 0 and predicted by the coach network in iterations > 0.
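A minimal sketch of those two masking modes as described above (illustrative only, not the repo's actual code; the shapes, patch size, and coach interface are assumptions):

```python
import torch

def make_mask(img, use_coach, iteration, coach_net=None, patch=16):
    """Return a binary mask of shape (1, H, W) for an image (C, H, W)."""
    _, H, W = img.shape
    if not use_coach:
        # Context Encoders: erase one random rectangular patch.
        mask = torch.ones(1, H, W)
        top = torch.randint(0, H - patch + 1, (1,)).item()
        left = torch.randint(0, W - patch + 1, (1,)).item()
        mask[:, top:top + patch, left:left + patch] = 0
    elif iteration == 0:
        # Coached method, iteration 0: uniform random mask values.
        mask = (torch.rand(1, H, W) > 0.5).float()
    else:
        # Later iterations: the coach network predicts where to mask.
        with torch.no_grad():
            mask = coach_net(img.unsqueeze(0)).squeeze(0)
    return mask
```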

For other ResNet-18 baselines in Table 2,

edizhuang commented 5 years ago

Hi Suriya,

I only saw your reply recently. Yes, I'm always using 100% unlabeled data.

For Context Encoders, I set use_coach = False, so epochs = [100] and iter_ = 0. The mean IoU and accuracy are also lower than the results in your paper (53.67 and 73.78 for 10% labeled data, 60.73 and 80.68 for 25% labeled data, 63.86 and 82.31 for 50% labeled data, and 65.69 and 83.59 for 100% labeled data).

The performance of your method (from my earlier run) and of Context Encoders (now) are quite close, which matches your paper. I'm using PyTorch 1.1.0 now and have no idea why the performance is lower.

Thanks!

M-Talha95 commented 4 years ago

Hello sir, I am using your code for my course project on the Potsdam dataset but am getting errors. I am running it with PyTorch (Spyder) on Windows. After loading the data, it gives a shape error in get_random_crop. Error 2:

image_file_name = self.img_root + self.image_list[index] + self.img_suffix + '.jpg'

TypeError: can only concatenate list (not "str") to list

Kindly give me your feedback, as I am new to coding and do not understand why this happens.
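For reference, this traceback means one operand in the string concatenation is a list rather than a string, most often because img_root (or img_suffix) was passed as a list. A minimal reproduction (the paths and tile name here are made up):

```python
img_suffix = '_RGB'

img_root = ['data/potsdam/images/']   # wrong: a one-element list
# img_root + 'top_potsdam_2_10' + img_suffix + '.jpg'   -> this TypeError

img_root = 'data/potsdam/images/'     # fix: a plain string
image_file_name = img_root + 'top_potsdam_2_10' + img_suffix + '.jpg'
```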

M-Talha95 commented 4 years ago

Secondly, I am getting a NoneType error:

File "C:\Users\TALHA\Desktop\Self-supervision-for-segmenting-overhead-imagery-master\utils\dataloaders.py", line 165, in get_random_crop roffset = torch.LongTensor(1).random(0, im.shape[0] - crop_shape[0] + 1)[0]

AttributeError: 'NoneType' object has no attribute 'shape'

I checked on the internet; people say this error occurs because the images are None, which is why it gives the NoneType error. The path matches the directory, though. Thanks
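For reference, cv2.imread returns None silently (no exception) when a file is missing or cannot be decoded, which then surfaces as this AttributeError on .shape. A guarded loader like the sketch below would report the bad path directly; it assumes the repo's loader uses OpenCV, which may not be the case:

```python
import os
import cv2

def load_image(path):
    # cv2.imread does not raise on a bad path; it returns None, which
    # later fails with "'NoneType' object has no attribute 'shape'".
    if not os.path.isfile(path):
        raise FileNotFoundError('Image not found: ' + path)
    im = cv2.imread(path)
    if im is None:
        raise IOError('cv2.imread could not decode: ' + path)
    return im
```

A common cause is an extension mismatch, e.g. the loader appending '.jpg' while the files on disk are '.tif' or '.png'.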

M-Talha95 commented 4 years ago


How do you create the images with the names listed in train_crops.txt for training? The data I downloaded from the website does not include images with those names. Kindly guide me.
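For context, the raw ISPRS Potsdam release ships large ortho tiles (e.g. top_potsdam_2_10_RGB.tif), so the entries in train_crops.txt presumably come from a preprocessing step that cuts those tiles into smaller crops. A rough sketch of such tiling follows; the directory layout, crop size, and naming scheme are guesses, not the repo's actual preprocessing, so adjust them to match the entries in train_crops.txt:

```python
import os
import cv2

TILE_DIR = 'Potsdam/2_Ortho_RGB'   # raw 6000x6000 RGB tiles (assumed layout)
OUT_DIR = 'data/train_crops'       # where the named crops would go
CROP = 512                         # crop size is a guess

os.makedirs(OUT_DIR, exist_ok=True)
for fname in sorted(os.listdir(TILE_DIR)):
    if not fname.endswith('.tif'):
        continue
    tile = cv2.imread(os.path.join(TILE_DIR, fname))
    if tile is None:
        continue  # skip unreadable files
    base = os.path.splitext(fname)[0]
    h, w = tile.shape[:2]
    # Cut the tile into a non-overlapping grid of CROP x CROP patches.
    for i, y in enumerate(range(0, h - CROP + 1, CROP)):
        for j, x in enumerate(range(0, w - CROP + 1, CROP)):
            crop = tile[y:y + CROP, x:x + CROP]
            cv2.imwrite(os.path.join(OUT_DIR, f'{base}_{i}_{j}.jpg'), crop)
```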