brando90 / pytorch-meta-dataset

A non-official 100% PyTorch implementation of META-DATASET benchmark for few-shot classification

error mds #24

Open brando90 opened 1 year ago

brando90 commented 1 year ago
```
====> about to start train loop
args.number_of_trainable_parameters=26257285
Starting training!
log_zeroth_step
Traceback (most recent call last):
  File "/lfs/ampere4/0/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_maml_torchmeta.py", line 188, in <module>
    main()
  File "/lfs/ampere4/0/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_maml_torchmeta.py", line 128, in main
    train(rank=-1, args=args)
  File "/lfs/ampere4/0/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_maml_torchmeta.py", line 166, in train
    meta_train_fixed_iterations(args, args.agent, args.dataloaders, args.opt, args.scheduler)
  File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/training/meta_training.py", line 114, in meta_train_fixed_iterations
    log_zeroth_step(args, meta_learner)
  File "/afs/cs.stanford.edu/u/brando9/ultimate-utils/ultimate-utils-proj-src/uutils/logging_uu/wandb_logging/meta_learning.py", line 26, in log_zeroth_step
    batch = next(iter(args.dataloaders['train']))  # this might advance the dataloader one step
  File "/lfs/ampere4/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/lfs/ampere4/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/lfs/ampere4/0/brando9/miniconda/envs/mds_env_gpu/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch
    data.append(next(self.dataset_iter))
  File "/afs/cs.stanford.edu/u/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/pytorch_mds_lib/pytorch_meta_dataset/pipeline.py", line 209, in __iter__
    next_e = self.get_next(rand_source)
  File "/afs/cs.stanford.edu/u/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/pytorch_mds_lib/pytorch_meta_dataset/pipeline.py", line 228, in get_next
    dataset = next(self.dataset_list[source_id])
  File "/afs/cs.stanford.edu/u/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/pytorch_mds_lib/pytorch_meta_dataset/pipeline.py", line 110, in __iter__
    episode_description = self.sampler.sample_episode_description(self.random_gen)
  File "/afs/cs.stanford.edu/u/brando9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/dataloaders/pytorch_mds_lib/pytorch_meta_dataset/sampling.py", line 476, in sample_episode_description
    raise ValueError('Some classes do not have enough examples.')  # noqa: E111
ValueError: Some classes do not have enough examples.
```

patricks-lab commented 1 year ago

I fixed it in my testing by adding the line `args.min_examples_in_class = 20`.
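
For illustration, a minimal sketch of where such a line could go (the function name here is hypothetical; `args` is assumed to be the experiment's config/argparse namespace):

```python
def setup_mds_args(args):
    """Hypothetical config helper; only the assignment below is the actual fix."""
    # Skip classes with fewer than 20 examples so that episode sampling
    # does not raise "Some classes do not have enough examples."
    args.min_examples_in_class = 20
    return args
```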

Does it work after that?

brando90 commented 1 year ago

> I fixed it in my testing by adding the line `args.min_examples_in_class = 20`.

Where did you do this? Can you please get into the habit of pushing your changes? I see nothing.

brando90 commented 1 year ago

> `args.min_examples_in_class = 20`

Also, can we have documentation somewhere (ideally a comment on the line before) for seemingly arbitrary things we have to specify in the config functions, so that we know why we are doing them? It's really a nightmare to come back 3 months later and see random things like this.

Also, your mds argparse should aim to have as many of these params set by default as possible, so that the user can run an experiment with the minimum amount of effort and without having to modify things. @patricks-lab
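
Something like the following would help (a sketch only; the flag name appears in the pytorch-mds parser, while the default value and help text here are assumptions):

```python
import argparse

parser = argparse.ArgumentParser()
# A sensible default spares the user from discovering this flag only after
# hitting the ValueError above (the default of 20 is an assumption).
parser.add_argument('--min_examples_in_class', type=int, default=20,
                    help='Skip classes that have fewer samples than this.')
args = parser.parse_args()
```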

brando90 commented 1 year ago

https://github.com/brando90/diversity-for-predictive-success-of-meta-learning/blob/eee330e2e0c0f1ba26e4e7438dbbeb3d59a15e43/div_src/diversity_src/experiment_mains/main_maml_torchmeta.py#L43

patricks-lab commented 1 year ago

Per pytorch-mds we have that

`--min_examples_in_class` => 'Classes that have less samples will be skipped'

So if we set `args.min_examples_in_class = args.k_shot + args.k_eval`, we ensure that each of the n ways sampled in our n-way k-shot task has enough samples for both the support and query sets.
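
As a sketch (assuming `args.k_shot` and `args.k_eval` are already set on the config namespace):

```python
# Each sampled class must fill both the support set (k_shot examples) and
# the query set (k_eval examples), so require at least their sum per class.
args.min_examples_in_class = args.k_shot + args.k_eval
```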

I will soon add that line to the episodic and batch dataloaders with the comment.