learnables / learn2learn

A PyTorch Library for Meta-learning Research
http://learn2learn.net
MIT License

Do we really need mini-imagenet padding to be 8? What about just having it be 84 without the 8? #376

Closed brando90 closed 1 year ago

brando90 commented 1 year ago

e.g.

        train_data_transforms = Compose([
            ToPILImage(),
            RandomCrop(84, padding=8),  # todo: do we really need the padding = 8
            ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
            RandomHorizontalFlip(),
            ToTensor(),
            normalize,
        ])
brando90 commented 1 year ago

@seba-1511 I am wondering if this is a bug. Notice how the test transform is:

        test_data_transforms = Compose([
            normalize,
        ])

where I honestly would have expected something like:

        test_data_transforms = Compose([
            Resize((84, 84)),
            ToTensor(),
            Normalize(mean=mean, std=std),
        ])

but the images are already 84? So when the random crop is being done, is it really being done correctly? An original ImageNet image is of size ~469, but these apparently are not, since the above transform does not do any resizing. I printed the sizes without any processing and they are already 84. I am definitely a bit puzzled about what is going on.

ref: https://discuss.pytorch.org/t/why-isnt-randomcrop-inserting-the-padding-in-pytorch/166244
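
A quick sanity check with a dummy 84x84 tensor (my own snippet, not the benchmark code) shows that RandomCrop(84, padding=8) by itself keeps the output at 84x84, because it pads the borders and then crops a same-sized window back out:

import torch
from torchvision.transforms import RandomCrop

img = torch.rand(3, 84, 84)           # stand-in for one 84x84 mini-ImageNet image
out = RandomCrop(84, padding=8)(img)  # pads each border by 8 (to 100x100), then crops 84x84
print(out.shape)                      # torch.Size([3, 84, 84])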

brando90 commented 1 year ago

I am finding more issues with the images. I printed some train images and some test/val images, and they look odd. Test/val image:

(screenshot: test/val image)

train:

(screenshot: train image)

it is missing the padding. Do you know what is going on? @seba-1511 Also, the output size is always 84, even without a Resize or RandomCrop. This is confusing.

seba-1511 commented 1 year ago

Yes, all images are 84x84 in the archive; the images are correct, unless your copy is corrupted.

In your train images you can see the padding; we shouldn't pad for test images.

brando90 commented 1 year ago

Yes, all images are 84x84 in the archive; the images are correct, unless your copy is corrupted.

In your train images you can see the padding; we shouldn't pad for test images.

Hi @seba-1511, thanks for your reply, I appreciate it. They are not corrupted, because they look like objects. What is puzzling to me is why the padding is missing from the images. I have created a fully reproducible script for you. I will paste the images this script produces -- with the padding missing.

(screenshots: four sampled train images, none showing visible padding)
brando90 commented 1 year ago

this is how an image with padding should look:

(screenshot: example image with visible padding)
brando90 commented 1 year ago

code:

def check_size_of_mini_imagenet_original_img():
    import random
    import numpy as np
    import torch
    import os
    seed = 0
    os.environ["PYTHONHASHSEED"] = str(seed)
    torch.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(seed)
    random.seed(seed)

    import learn2learn
    batch_size = 5
    kwargs: dict = dict(name='mini-imagenet', train_ways=2, train_samples=2, test_ways=2, test_samples=2)
    kwargs['data_augmentation'] = 'lee2019'
    benchmark: learn2learn.BenchmarkTasksets = learn2learn.vision.benchmarks.get_tasksets(**kwargs)
    splits = ['train', 'validation', 'test']  # the three BenchmarkTasksets splits
    tasksets = [(split, getattr(benchmark, split)) for split in splits]
    for i, (split, taskset) in enumerate(tasksets):
        print(f'{taskset=}')
        print(f'{taskset.dataset.dataset.transform=}')
        for task_num in range(batch_size):
            X, y = taskset.sample()
            print(f'{X.size()=}')
            assert X.size(2) == 84
            print(f'{y.size()=}')
            print(f'{y=}')
            for img_idx in range(X.size(0)):
                visualize_pytorch_tensor_img(X[img_idx], show_img_now=True)
                if img_idx >= 5:  # print 5 images only
                    break
            # visualize_pytorch_batch_of_imgs(X, show_img_now=True)
            print()
            if task_num >= 4:  # so to get a MI image finally (note omniglot does not have padding at train...oops!)
                break
            break
        break

and

import torch  # needed at module level so the torch.Tensor annotation below resolves


def visualize_pytorch_tensor_img(tensor_image: torch.Tensor, show_img_now: bool = False):
    """
    Channel orders in PyTorch (CHW) and matplotlib (HWC) do not agree, so given a
    tensor representing the image, use .permute() to put the channels last.

    ref: https://stackoverflow.com/questions/53623472/how-do-i-display-a-single-image-in-pytorch
    """
    from matplotlib import pyplot as plt
    assert len(tensor_image.size()) == 3, f'Err: your tensor has the wrong shape {tensor_image.size()=}, ' \
                                          f'it should be a single image tensor with 3 channels, i.e. CHW.'
    if tensor_image.size(0) == 3:  # three channels
        plt.imshow(tensor_image.permute(1, 2, 0))
    else:
        plt.imshow(tensor_image)
    if show_img_now:
        plt.tight_layout()
        plt.show()
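
To run it (assuming both functions above live in the same file):

if __name__ == '__main__':
    check_size_of_mini_imagenet_original_img()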
brando90 commented 1 year ago

This happens when I do it in torchmeta too (the padding is missing there as well; documented here: https://stackoverflow.com/questions/74482017/why-isnt-randomcrop-inserting-the-padding-in-pytorch ).

So I must conclude torch has a bug. Printing its version:

sys.version='3.9.7 (default, Sep 16 2021, 08:50:36) \n[Clang 10.0.0 ]'
torch.__version__='1.9.1'
seba-1511 commented 1 year ago

They are padded by 8 pixels on each border (to 100x100) and then randomly cropped back to 84x84: you can see the black padding on each image (e.g., on the left of the 2nd image).
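
Here is a small sketch (illustrative only, using torchvision's functional API rather than learn2learn code) that decomposes the transform into its pad and crop steps, which is why only part of the padding survives and on different sides for different images:

import torch
import torchvision.transforms.functional as F

img = torch.rand(3, 84, 84)
padded = F.pad(img, 8)                    # 8 px of zeros on every border -> 3x100x100
top, left = 0, 3                          # example offsets; RandomCrop draws each from [0, 16]
crop = F.crop(padded, top, left, 84, 84)
# with left=3 (< 8), the first 5 columns of the crop are still zero padding,
# i.e. a black strip on the left edge; an offset >= 8 would show no padding on that side
print((crop[:, :, :5] == 0).all())        # tensor(True)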

brando90 commented 1 year ago

They are padded by 8 pixels on each border (to 100x100) and then randomly cropped back to 84x84: you can see the black padding on each image (e.g., on the left of the 2nd image).

Weird. I just discovered that by trying it on CIFAR. But note this is NOT what the docs say for RandomCrop:

Optional padding on each border of the image. Default is None. If a single int is provided this is used to pad all borders.

it says something very similar to Pad:

Padding on each border. If a single int is provided this is used to pad all borders.
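
In other words (my reading of the torchvision docs, not verified against its source), passing padding to RandomCrop should behave like an explicit Pad followed by a plain RandomCrop:

from torchvision import transforms

# assumed-equivalent decomposition of RandomCrop(84, padding=8)
equivalent = transforms.Compose([
    transforms.Pad(8),           # pad every border by 8: 84x84 -> 100x100
    transforms.RandomCrop(84),   # then take a random 84x84 window
])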

brando90 commented 1 year ago

PyTorch GitHub issue: https://github.com/pytorch/vision/issues/6967

brando90 commented 1 year ago

They are padded by 8 pixels on each border (to 100x100) and then randomly cropped back to 84x84: you can see the black padding on each image (e.g., on the left of the 2nd image).

it looks like the amount of visible padding is random, depending on the crop. Is the padding added on all sides before the crop?
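
A quick standalone check (my own snippet, assuming the behavior is pad-all-borders-then-crop as the torchvision docs suggest) shows that which edges still carry padding varies with the random offset:

import torch
from torchvision import transforms

img = torch.rand(3, 84, 84)
crop = transforms.RandomCrop(84, padding=8)
for _ in range(3):
    out = crop(img)
    left_black = bool((out[:, :, 0] == 0).all())  # padding still visible on the left edge?
    top_black = bool((out[:, 0, :] == 0).all())   # padding still visible on the top edge?
    print(left_black, top_black)                  # varies with the random crop offset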