Closed: brando90 closed this issue 1 year ago.
Related: why are the images not of size 100? https://stackoverflow.com/questions/72208865/why-is-randomcrop-with-size-84-and-padding-8-returning-an-image-size-of-84-and-n
Cropping is done after padding. I don't know why, but it is; it seems odd to me.
https://pytorch.org/vision/main/generated/torchvision.transforms.RandomCrop.html
pad_if_needed (boolean) – It will pad the image if smaller than the desired size to avoid raising an exception. Since cropping is done after padding, the crop is taken at a random offset within the padded image, so the output size is always the requested crop size.
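The pad-then-crop order can be illustrated with a minimal pure-Python sketch (this is an illustration of the semantics, not torchvision's actual implementation): padding 84x84 by 8 gives a 100x100 intermediate, but the random crop then brings it back to 84x84.

```python
import random

def random_crop_with_padding(img, size, padding):
    """Pad first, then crop at a random offset -- the same order
    torchvision's RandomCrop uses. Illustrative sketch only."""
    # img is a 2D list (H x W); pad all four sides with zeros.
    h, w = len(img), len(img[0])
    padded_w = w + 2 * padding
    padded = [[0] * padded_w for _ in range(padding)]
    for row in img:
        padded.append([0] * padding + list(row) + [0] * padding)
    padded += [[0] * padded_w for _ in range(padding)]

    # Pick a random top-left corner inside the padded image.
    top = random.randint(0, len(padded) - size)
    left = random.randint(0, len(padded[0]) - size)
    return [r[left:left + size] for r in padded[top:top + size]]

img = [[1] * 84 for _ in range(84)]        # an 84x84 "image"
out = random_crop_with_padding(img, size=84, padding=8)
print(len(out), len(out[0]))               # 84 84, not 100 100
```

This is why RandomCrop(84, padding=8) returns 84x84 images rather than 100x100: the padding only enlarges the intermediate image that the crop samples from.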
Closing: inactive.
I want a few-shot learning dataset that works similarly to Meta-Dataset (as a first step toward that), i.e. sample a dataset first, then create an n-way, k-shot task from it. Based on the following Slack discussion:
The Slack discussion suggested creating an indexable dataset, a task transform that indexes it, and then giving that to TaskDataset. I don't think that works, because the transforms require the dataset at creation time. So instead I created a single transform that dynamically selects the dataset and then builds the task transforms from it.
I think it works, since the print statements display different n-way class indices and the image sizes look correct to me. I'll post it here in case it's useful to someone else and, most importantly, so it can be corrected if it's wrong (since it doesn't follow what @seba-1511 initially suggested):
output:
Related Meta-Dataset issue: https://github.com/learnables/learn2learn/issues/286
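For reference, the sample-a-dataset-then-build-a-task idea described above can be sketched library-free as follows (all names here are hypothetical and chosen for illustration, not learn2learn's actual API):

```python
import random

class SampleDatasetThenTask:
    """Hypothetical sketch: a single callable that first picks a dataset
    at call time, then samples an n-way, k-shot task from it."""

    def __init__(self, datasets, n_ways, k_shots):
        # datasets: list of dicts mapping class label -> list of examples
        self.datasets = datasets
        self.n_ways = n_ways
        self.k_shots = k_shots

    def __call__(self):
        # 1. Sample a dataset first (the Meta-Dataset-style step).
        dataset = random.choice(self.datasets)
        # 2. Then sample n classes from that dataset...
        classes = random.sample(sorted(dataset), self.n_ways)
        # 3. ...and k examples per class.
        return {c: random.sample(dataset[c], self.k_shots) for c in classes}

# Toy usage: two "datasets" with integer stand-ins for examples.
d1 = {c: list(range(10)) for c in ["cat", "dog", "bird"]}
d2 = {c: list(range(10)) for c in ["car", "bus", "bike", "van"]}
sampler = SampleDatasetThenTask([d1, d2], n_ways=2, k_shots=3)
task = sampler()
print(len(task))  # 2 classes, each mapped to 3 examples
```

Because the dataset is chosen inside `__call__`, nothing needs to be fixed at construction time, which is the workaround for transforms otherwise requiring the dataset up front.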