Closed Markin-Wang closed 3 years ago
Hi,
Thanks for your interest. Could you send the exp_dict you are using?
Thanks for your early reply. The exp_dict is shown below. In addition, soybean is my own dataset, which follows a setup similar to CUB: 100 base classes, 50 validation classes, and 50 test classes.
```python
{'active_size': 0, 'avgpool': True, 'batch_size': 128,
 'classes_test': 5, 'classes_train': 5, 'classes_val': 5,
 'collate_fn': 'default', 'cross_entropy_weight': 1,
 'dataset_test': 'soybean', 'dataset_test_root': 'soybean',
 'dataset_train': 'soybean', 'dataset_train_root': 'soybean',
 'dataset_val': 'soybean', 'dataset_val_root': 'soybean',
 'distance_type': 'labelprop', 'dropout': 0.1, 'embedding_prop': True,
 'few_shot_weight': 0, 'kernel_bound': '', 'lr': 0.2,
 'max_epoch': 100, 'min_lr_decay': 0.0001,
 'model': {'backbone': 'wrn', 'depth': 28, 'name': 'pretraining',
           'transform_test': 'wrn_val', 'transform_train': 'wrn_pretrain_train',
           'transform_val': 'wrn_val', 'width': 10},
 'n_classes': 100, 'ngpu': 3, 'patience': 10,
 'query_size_test': 2, 'query_size_train': 2, 'query_size_val': 2,
 'random_seed': 42, 'rotation_labels': [0, 1, 2, 3], 'rotation_weight': 1,
 'support_size_test': 1, 'support_size_train': 5, 'support_size_val': 1,
 'target_loss': 'val_accuracy', 'tasks_per_batch': 1,
 'test_iters': 600, 'train_iters': 600,
 'transform_test': 'wrn_val', 'transform_train': 'wrn_pretrain_train',
 'transform_val': 'wrn_val',
 'unlabeled_size_test': 0, 'unlabeled_size_train': 0, 'unlabeled_size_val': 0,
 'val_iters': 600, 'weight_decay': 0.0005}
```
OK, if you use a custom pre-training dataset, take into account that EPNet is also trained to predict image rotations, so the dataset must output x, y, r: images, labels, and rotation labels (0=0°, 1=90°, 2=180°, 3=270°). See this for how to fix your dataset:
https://github.com/ElementAI/embedding-propagation/blob/master/src/datasets/miniimagenet.py#L85
If you do not want to use rotations, just remove everything that uses r here:
https://github.com/ElementAI/embedding-propagation/blob/master/src/models/pretraining.py#L87
For instance:
```python
x, y = batch
```
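To illustrate the (x, y, r) convention above, here is a minimal sketch of how a custom dataset could produce rotation labels. It assumes a NumPy image in H×W×C layout; the function name `make_rotation_sample` and its arguments are illustrative, not the repo's actual API:

```python
import numpy as np

def make_rotation_sample(img, label, rot):
    """Return the (x, y, r) triple that EPNet's pretraining loop unpacks.

    img:   H x W x C image array
    label: class label
    rot:   rotation label in {0, 1, 2, 3}, meaning 0/90/180/270 degrees
    """
    # np.rot90 rotates counter-clockwise in the (H, W) plane.
    rotated = np.rot90(img, k=rot, axes=(0, 1)).copy()
    return rotated, label, rot

# Example: a 4x6 RGB image rotated by 90 degrees becomes 6x4.
x, y, r = make_rotation_sample(np.zeros((4, 6, 3)), 7, 1)
```

The dataset's `__getitem__` would then return such a triple, matching the `x, y, r = batch` unpacking in `pretraining.py`.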
Thanks for your support. It now works.
Hi, thanks for your work. I am trying to pretrain the model and an error occurs:
```
Traceback (most recent call last):
  File "trainval.py", line 220, in
    pretrained_weights_dir=args.pretrained_weights_dir)
  File "trainval.py", line 143, in trainval
    score_dict.update(model.train_on_loader(train_loader))
  File "embedding-propagation/src/models/pretraining.py", line 174, in train_on_loader
    loss = self.train_on_batch(batch)
  File "embedding-propagation/src/models/pretraining.py", line 87, in train_on_batch
    x, y, r = batch
ValueError: not enough values to unpack (expected 3, got 2)
```
I would be grateful if you could provide any support. Jun