lorenmt / reco

The implementation of "Bootstrapping Semantic Segmentation with Regional Contrast" [ICLR 2022].
https://shikun.io/projects/regional-contrast

Provide environment description #1

Closed DominikFilipiak closed 3 years ago

DominikFilipiak commented 3 years ago

Could you please provide an environment description in the readme (a list of packages and their versions, etc.)?

lorenmt commented 3 years ago

Hello Dominik,

Thanks for reaching out. The code was written with PyTorch 1.7+ and minimal additional dependencies. The remaining packages are standard in the data science ecosystem, such as numpy and matplotlib. I will update the readme to specify this in more detail.

Sk.

DominikFilipiak commented 3 years ago

Thanks for your fast response.

In built_data.py, the following code produces an error (TypeError: 'tuple' object is not callable):

if torch.rand(1) > 0.2:
    color_transform = transforms.ColorJitter.get_params((0.75, 1.25), (0.75, 1.25), (0.75, 1.25), (-0.25, 0.25))
    image = color_transform(image)  # TypeError is raised here: color_transform is a tuple, not a callable transform

This error can be resolved by changing it in the following way (though I am not sure it does exactly the same thing):

color_transform = transforms.ColorJitter((0.75, 1.25), (0.75, 1.25), (0.75, 1.25), (-0.25, 0.25))

I assume this is due to a version mismatch (I ran the code with pytorch=1.9, torchvision=0.10, and cudatoolkit=11.1).
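If it helps, here is a minimal sketch of what I think changed (assuming torchvision >= 0.9; just an illustration, not code from the repo):

import torchvision.transforms as transforms

# In newer torchvision, ColorJitter.get_params returns the sampled jitter
# parameters (an application-order index plus brightness/contrast/saturation/hue
# factors) instead of a callable transform, so calling the result on an image fails.
params = transforms.ColorJitter.get_params((0.75, 1.25), (0.75, 1.25), (0.75, 1.25), (-0.25, 0.25))
print(type(params))  # tuple on torchvision 0.10; a Compose transform on 0.8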

lorenmt commented 3 years ago

Hello,

Yes, I have reproduced your problem with the new version of torchvision; I really appreciate the report. I will update the readme accordingly to address this issue.

Your modification is correct. You can also quickly check this function by loading a random image and seeing whether calling color_transform produces a new image with the corresponding color jitter.
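For example, something along these lines should be enough to eyeball it (a rough sketch; the image path is just a placeholder):

from PIL import Image
import torchvision.transforms as transforms

# Quick visual sanity check: apply the jitter to any local test image.
image = Image.open('test.jpg')  # placeholder path, use any image you have
color_transform = transforms.ColorJitter((0.75, 1.25), (0.75, 1.25), (0.75, 1.25), (-0.25, 0.25))
jittered = color_transform(image)
jittered.show()  # should look like the original with a random brightness/contrast/saturation/hue shift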

lorenmt commented 3 years ago

By the way, an alternative that corresponds exactly to the old version I used (https://pytorch.org/vision/0.8/_modules/torchvision/transforms/transforms.html#ColorJitter) would be:

if torch.rand(1) > 0.2:
    # In the new torchvision, get_params returns (fn_idx, brightness, contrast, saturation, hue) factors
    color_params = transforms.ColorJitter.get_params((0.75, 1.25), (0.75, 1.25), (0.75, 1.25), (-0.25, 0.25))

    # transforms_f is torchvision.transforms.functional; each lambda applies one sampled factor
    color_transforms = \
        [transforms.Lambda(lambda img: transforms_f.adjust_brightness(img, color_params[1])),
         transforms.Lambda(lambda img: transforms_f.adjust_contrast(img, color_params[2])),
         transforms.Lambda(lambda img: transforms_f.adjust_saturation(img, color_params[3])),
         transforms.Lambda(lambda img: transforms_f.adjust_hue(img, color_params[4]))]

    # Shuffle so the four adjustments are applied in a random order, as in the old implementation
    random.shuffle(color_transforms)
    color_transform = transforms.Compose(color_transforms)
    image = color_transform(image)

But I believe they are the same...