dvornikita / fewshot_ensemble

Ensembles of CNNs for few-shot image classification

ValueError: Sample larger than population #3

Closed shivamsaboo17 closed 4 years ago

shivamsaboo17 commented 4 years ago

I am getting this error from the meta_dataset.py file:

chosen_class_inds = random.sample(
                all_class_inds, self.n_train + self.n_test)

I am training a single model on cub dataset. Do you know what might be causing this?

Values of self.n_train -> 15, self.n_test -> 5, len(all_class_inds) -> 30
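For context, random.sample raises "ValueError: Sample larger than population" only when the requested sample size exceeds the population size. A minimal sketch (the variable names mirror the snippet above; the values are the ones reported, so this particular call actually succeeds, which suggests len(all_class_inds) is smaller at the failing call):

```python
import random

# Reported values: 15 + 5 = 20 classes requested out of 30 available.
all_class_inds = list(range(30))
n_train, n_test = 15, 5

# Succeeds: 20 <= 30.
chosen_class_inds = random.sample(all_class_inds, n_train + n_test)

# The error appears when the population is smaller than the request,
# e.g. if only 10 class indices were actually loaded:
try:
    random.sample(all_class_inds[:10], n_train + n_test)  # 20 > 10
except ValueError as e:
    print(e)  # Sample larger than population or is negative
```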

dvornikita commented 4 years ago

Hi,

Could you please give me the command you are running and the full error trace?

Also, how do you know that self.n_train -> 15, self.n_test -> 5, and len(all_class_inds) -> 30? Did you check this with a debugger or print statements, or are those the intended values?

shivamsaboo17 commented 4 years ago

Hi @dvornikita, thanks for the quick reply. The code is working now. I created non-overlapping class-wise splits in the train, val, and test CSVs for the CUB dataset (random splitting of classes). I have a question, however.

I was training a single model using the command:

python singles/train.py --model.model_name=wideresnet --data.dataset=cub --model.backbone=wide 

Now, since my splits are non-overlapping across the train, validation, and test sets (as is the case in few-shot learning), can you please tell me how the validation accuracy is computed for this single model, given that the classes in the training and validation sets are different? Are you using a distance-based classifier in the single model as well, computing embedding vectors from the few-shot support set of the validation dataset? Or are you extending the model by modifying the last layer and fine-tuning, or something else?

Thanks!

dvornikita commented 4 years ago

I am glad you made it work.

As for validation and test, we only use the feature extractor CNN (with no fully-connected layer at the end) and build a prototype classifier on the obtained features. You can read about it in Section 3 of the original paper.
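A minimal sketch of such a prototype (nearest-centroid) classifier on extracted features; the function and variable names here are illustrative, not the repository's API:

```python
import numpy as np

def prototype_classify(support_feats, support_labels, query_feats):
    """Classify query embeddings by nearest class prototype.

    Each prototype is the mean of that class's support embeddings;
    queries receive the label of the closest prototype under
    Euclidean distance.
    """
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])
    # (n_query, n_classes) squared distances to each prototype
    dists = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]

# Toy example: 2 classes in a 2-D feature space.
support = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.05, 0.05], [0.95, 0.9]])
preds = prototype_classify(support, labels, queries)
print(preds)  # [0 1]
```

In the few-shot setting, support_feats would be the CNN embeddings of the few labeled validation-class images, so no new fully-connected layer or fine-tuning is needed for unseen classes.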


shivamsaboo17 commented 4 years ago

Thanks for clarifying, will look at the paper for more details!