Closed — ambekarsameer96 closed this issue 2 years ago.
@seba-1511 can you please take a look?
Yes, that is correct, since NWays and KShots sample their data randomly (akin to shuffling, but not quite the same).
Hello @seba-1511, thanks for the repo and the examples. I have a question about the maml-miniimagenet example:
```python
# Separate data into adaptation/evaluation sets
adaptation_indices = np.zeros(data.size(0), dtype=bool)
adaptation_indices[np.arange(shots * ways) * 2] = True
evaluation_indices = torch.from_numpy(~adaptation_indices)
adaptation_indices = torch.from_numpy(adaptation_indices)
adaptation_data, adaptation_labels = data[adaptation_indices], labels[adaptation_indices]
evaluation_data, evaluation_labels = data[evaluation_indices], labels[evaluation_indices]
```
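For intuition, the boolean mask in that snippet marks every other sample in the task batch as adaptation data and leaves the rest for evaluation. A minimal sketch with NumPy only (the `ways`/`shots` values here are hypothetical, chosen just to make the interleaving visible):

```python
import numpy as np

# Hypothetical task size: 2 ways, 3 shots -> 2 * shots * ways = 12 samples,
# laid out so that adaptation and evaluation samples interleave.
ways, shots = 2, 3
n = 2 * shots * ways

adaptation_indices = np.zeros(n, dtype=bool)
adaptation_indices[np.arange(shots * ways) * 2] = True  # marks indices 0, 2, 4, ...

print(adaptation_indices.nonzero()[0])     # -> [ 0  2  4  6  8 10]
print((~adaptation_indices).nonzero()[0])  # -> [ 1  3  5  7  9 11]
```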
Could we instead use sklearn.model_selection.train_test_split with stratify, which splits the samples by class, i.e., ensures that every class has at least one sample in both the train and test splits?
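For reference, a minimal sketch of that alternative, assuming a task batch with two samples per class (the label layout here is hypothetical; in the real example `data` would be the image tensor):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical 5-way task with 2 samples per class.
labels = np.repeat(np.arange(5), 2)      # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
data = np.arange(len(labels))            # stand-in for the image tensors

adapt_x, eval_x, adapt_y, eval_y = train_test_split(
    data, labels,
    test_size=0.5,
    stratify=labels,  # guarantees each class appears in both splits
)
```

Note that `stratify` guarantees class balance across the splits, whereas the index-interleaving trick in the example additionally relies on the sampler's fixed within-class ordering.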
Hi, if I am using the following for a custom dataset, does it ensure that the training samples across all classes are shuffled at every iteration of the loop shown below? (There is no transform for shuffling.)
After this, I directly use a loop like this (since task.sampler() is not available for a custom dataset):
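If no shuffle transform is applied, one way to guarantee a fresh order each iteration is to permute the batch manually. A minimal sketch, assuming `data` and `labels` are index-aligned arrays (names hypothetical, not from the original loop):

```python
import numpy as np

# Hypothetical stand-ins for a custom dataset's task batch.
num_samples = 10
data = np.arange(num_samples) * 100   # placeholder for image tensors
labels = np.arange(num_samples) % 5   # placeholder for class labels

rng = np.random.default_rng(0)
for iteration in range(3):
    perm = rng.permutation(num_samples)      # fresh random order each iteration
    shuffled_data = data[perm]               # apply the SAME permutation to both
    shuffled_labels = labels[perm]           # so pairs stay aligned
```

Applying one shared permutation to both arrays keeps each sample matched to its label, which a per-array shuffle would break.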