learnables / learn2learn

A PyTorch Library for Meta-learning Research
http://learn2learn.net
MIT License
2.59k stars 348 forks

Learn2Learn's TieredImageNet taking a long time to load; QuickDraw errors during transformation stage? #392

Closed. patricks-lab closed this issue 1 year ago.

patricks-lab commented 1 year ago

(Reposted here from the learnables Slack, since Slack history is limited and isn't a permanent record.)

Two questions (this is for a project I'm working on with Brando):

  1. Why does TieredImageNet take so long to load (>10 minutes, longer than the other learn2learn datasets I've worked with)? (FYI: I had to manually download the dataset from Kaggle, https://www.kaggle.com/datasets/andrijdavid/fsl-imagenet, since the original Google Drive link was broken.) I just want to make sure this long load time for TieredImageNet is expected and normal, and not a sign that something is wrong.

  2. I'm trying to use learn2learn's QuickDraw, but I get errors when I apply transformations (resizing to 84x84, random crop) to it. The issue arises when l2l's QuickDraw dataset applies the transforms to a QuickDraw image, which is apparently stored as a np.memmap/.npy record that PIL can't understand, so I get the following error:

    Traceback (most recent call last):
      File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/maml_patricks_l2l.py", line 2300, in <module>
        loop_through_l2l_indexable_benchmark_with_model_test()
      File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/maml_patricks_l2l.py", line 2259, in loop_through_l2l_indexable_benchmark_with_model_test
        for benchmark in [quickdraw_l2l_tasksets()]: #hdb8_l2l_tasksets(),hdb9_l2l_tasksets(), delaunay_l2l_tasksets()]:#[dtd_l2l_tasksets(), cu_birds_l2l_tasksets(), fc100_l2l_tasksets()]:
      File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/maml_patricks_l2l.py", line 2216, in quickdraw_l2l_tasksets
        _transforms: tuple[TaskTransform, TaskTransform, TaskTransform] = get_task_transforms_quickdraw(_datasets,
      File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/maml_patricks_l2l.py", line 2184, in get_task_transforms_quickdraw
        train_transforms: TaskTransform = DifferentTaskTransformIndexableForEachDataset(train_dataset,
      File "/home/pzy2/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/common.py", line 130, in __init__
        self.indexable_dataset = MetaDataset(indexable_dataset)
      File "learn2learn/data/meta_dataset.pyx", line 59, in learn2learn.data.meta_dataset.MetaDataset.__init__
      File "learn2learn/data/meta_dataset.pyx", line 96, in learn2learn.data.meta_dataset.MetaDataset.create_bookkeeping
      File "learn2learn/data/meta_dataset.pyx", line 65, in learn2learn.data.meta_dataset.MetaDataset.__getitem__
      File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/learn2learn/vision/datasets/quickdraw.py", line 511, in __getitem__
        image = self.transform(image)
      File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 60, in __call__
        img = t(img)
      File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 900, in forward
        i, j, h, w = self.get_params(img, self.scale, self.ratio)
      File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 859, in get_params
        width, height = F._get_image_size(img)
      File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/functional.py", line 67, in _get_image_size
        return F_pil._get_image_size(img)
      File "/home/pzy2/miniconda3/envs/metalearning3.9/lib/python3.9/site-packages/torchvision/transforms/functional_pil.py", line 26, in _get_image_size
        raise TypeError("Unexpected type {}".format(type(img)))
    TypeError: Unexpected type <class 'numpy.memmap'>

seba-1511 commented 1 year ago

Hello @patricks-lab,

  1. Tiered-ImageNet is larger than some of the other datasets, but even so 10 minutes seems excessive. Maybe something is off with the Kaggle version of the dataset. I've been meaning to upload the learn2learn versions to Zenodo.
  2. Could you provide a short Colab replicating the issue? The fix might be as simple as converting the memmap to an image with PIL.

Thanks,

brando90 commented 1 year ago

Cross-posted: https://stackoverflow.com/questions/76082746/how-to-transform-a-quickdraw-image-to-84-by-84-in-pytorch-using-the-learn2learn and https://discuss.pytorch.org/t/how-to-transform-a-quickdraw-image-to-84-by-84-in-pytorch-using-the-learn2learn-library/178292

seba-1511 commented 1 year ago

Closing since we'll be moving datasets to Zenodo in #400.