tangdouer closed this issue 2 years ago
Hi,
In order to run the results of EASY 3xResNet12 on miniimagenet:
Inductive
$ python main.py --dataset-path "<dataset-path>" --dataset miniimagenet --model resnet12 --test-features "[<path>/minifeaturesAS1.pt11, <path>/minifeaturesAS2.pt11, <path>/minifeaturesAS3.pt11]" --preprocessing ME
Transductive
$ python main.py --dataset-path "<dataset-path>" --dataset miniimagenet --model resnet12 --test-features "[<path>/minifeaturesAS1.pt11, <path>/minifeaturesAS2.pt11, <path>/minifeaturesAS3.pt11]" --postprocessing ME --transductive --transductive-softkmeans --transductive-temperature-softkmeans 5
Thank you~
How should the three files passed to --test-features above be obtained?
Hi,
- If your question is on where to download the files: they are in the link provided in the README.md: https://drive.google.com/drive/folders/1uc-uzAt1peo3FuEDOFIolWSoq2o8kUSU For all the other files, here is the link: https://drive.google.com/drive/folders/1fMeapvuR6Rby0HDHd5L74BEXRyiOF942 I recommend checking the README.md, where the structure of the files is explained.
- If your question is on how to compute the accuracy, you have to run:
$ python main.py --dataset-path "<dataset-path>" --dataset miniimagenet --model resnet12 --preprocessing "ME" --test-features "[\"featuresAS1.pt11\",\"featuresAS2.pt11\",\"featuresAS3.pt11\"]" --n-shots 1
Of course you can add --transductive, or change the number of shots.
- If your question is on how to generate the features from the pretrained backbones, assuming you already have 3 pre-trained backbone files, you need to run the following command 3 times, once for each backbone:
$ python main.py --dataset-path "<dataset-path>" --dataset miniimagenet --model resnet12 --epochs 0 --load-model "<path>/mini<backbone-number>.pt1" --save-features "<path>/minifeaturesAS<backbone-number>.pt1" --n-shots 1 --sample-aug 30
where <backbone-number> is the number of the backbone, from 1 to 3. You can find the pre-trained ResNet12 backbones here: https://drive.google.com/drive/folders/1nrf8kWEQ9SgVpuejnwwBTqlTuYSKVRDp
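To avoid typos across the three runs, the feature-extraction command above can be templated per backbone number. This is a minimal sketch in Python; the `<path>` and `<dataset-path>` placeholders and the helper name are illustrative, not part of the repository:

```python
# Build the three feature-extraction commands, one per backbone.
# Placeholders ("<path>", "<dataset-path>") must be replaced with real paths.
def feature_command(backbone_number, path="<path>", dataset_path="<dataset-path>"):
    return [
        "python", "main.py",
        "--dataset-path", dataset_path,
        "--dataset", "miniimagenet",
        "--model", "resnet12",
        "--epochs", "0",
        "--load-model", f"{path}/mini{backbone_number}.pt1",
        "--save-features", f"{path}/minifeaturesAS{backbone_number}.pt1",
        "--n-shots", "1",
        "--sample-aug", "30",
    ]

for n in (1, 2, 3):
    cmd = feature_command(n)
    # subprocess.run(cmd)  # uncomment to actually run the extraction
    print(" ".join(cmd))
```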
- If you would like to re-train new backbones from the raw datasets:
$ python main.py --dataset-path "<dataset-path>" --dataset miniimagenet --model resnet12 --epochs 0 --manifold-mixup 500 --rotations --cosine --gamma 0.9 --milestones 100 --batch-size 128 --preprocessing ME
I hope this answers your questions, Best,
Thank you for your patience in solving my problem, and sorry to bother you again.
I still have a question: is the training script for the three backbones as follows?
$ python main.py --dataset-path "
Hi, Don't worry and don't hesitate to ask more questions if it's still not clear.
$ python main.py --dataset-path "" --dataset miniimagenet --model resnet12 --epochs 0 --manifold-mixup 500 --rotations --cosine --gamma 0.9 --milestones 100 --skip-epochs 450 --batch-size 128 --preprocessing ME --save-model "<path>/mini1.pt1" --n-shots 1
$ python main.py --dataset-path "" --dataset miniimagenet --model resnet12 --epochs 0 --manifold-mixup 500 --rotations --cosine --gamma 0.9 --milestones 100 --skip-epochs 450 --batch-size 128 --preprocessing ME --save-model "<path>/mini2.pt1" --n-shots 1
$ python main.py --dataset-path "" --dataset miniimagenet --model resnet12 --epochs 0 --manifold-mixup 500 --rotations --cosine --gamma 0.9 --milestones 100 --skip-epochs 450 --batch-size 128 --preprocessing ME --save-model "<path>/mini3.pt1" --n-shots 1
The three backbones should have the same training routine. The only difference is their initialization.
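The initialization point can be illustrated with a toy sketch (plain Python `random`, not the repository's code): identically scripted runs that are seeded differently start from different weights, which is what makes the three backbones diverge into an ensemble.

```python
import random

# Toy stand-in for network initialization: draw a few Gaussian "weights".
def init_weights(seed, n=4):
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

w1 = init_weights(seed=1)
w2 = init_weights(seed=2)
print(init_weights(seed=1) == w1)  # True: same seed reproduces the same init
print(w1 == w2)                    # False: different seeds, different inits
```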
I will add these commands to the README.md file as it might not be clear. Thank you for pointing it out.
Best,
Thank you again, I get it.
The order is as follows:
(1) save model (random seed)
$ python main.py --dataset-path "" --dataset miniimagenet --model resnet12 --epochs 0 --manifold-mixup 500 --rotations --cosine --gamma 0.9 --milestones 100 --skip-epochs 450 --batch-size 128 --preprocessing ME --save-model "
$ python main.py --dataset-path "" --dataset miniimagenet --model resnet12 --epochs 0 --manifold-mixup 500 --rotations --cosine --gamma 0.9 --milestones 100 --skip-epochs 450 --batch-size 128 --preprocessing ME --save-model "
$ python main.py --dataset-path "" --dataset miniimagenet --model resnet12 --epochs 0 --manifold-mixup 500 --rotations --cosine --gamma 0.9 --milestones 100 --skip-epochs 450 --batch-size 128 --preprocessing ME --save-model "
I'm so sorry, I have a simple question: (1) why set --epochs to 0?
Hi, The order and the commands are correct. For (2) save features, you can also specify --batch-size if you want to go faster as the default one is 64.
--epochs is set to 0 because we use --manifold-mixup 500, which is 500 epochs of manifold mixup. Let's take 2 examples where you want to use the --epochs argument:
Therefore, --epochs adds extra epochs without mixup at the beginning of the training; you can use it if you don't want to use --manifold-mixup. You can see it in these 2 lines of code: args.epochs and args.manifold_mixup are summed, and train() receives the boolean argument mm = epoch >= args.epochs. If the current epoch is greater than or equal to --epochs, it performs manifold mixup. I hope this answers your questions, good luck with the experiments!
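The scheduling described above can be sketched as follows (function and variable names are illustrative, not the repository's actual code):

```python
# Sketch of how --epochs and --manifold-mixup combine into one training run.
def run_training(epochs, manifold_mixup):
    """Return, per epoch, whether manifold mixup is active.

    args.epochs and args.manifold_mixup are summed into the total epoch
    count; mixup starts once the plain --epochs warm-up phase is over.
    """
    schedule = []
    total = epochs + manifold_mixup
    for epoch in range(total):
        mm = epoch >= epochs  # the boolean that train() receives
        schedule.append(mm)
    return schedule

# With --epochs 0 --manifold-mixup 500, every epoch uses manifold mixup:
sched = run_training(0, 500)
print(len(sched), all(sched))  # 500 True
```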
Thank you again. I see.
Best,
Excuse me, have you reproduced the performance of 3xResNet12 from the paper with the above commands?
I followed your instruction:
python main.py --dataset-path "
Then I used this command to test with ASY (testing each of the three feature files separately): python main.py --dataset-path "/home/stud/dyh/vir/data/" --dataset miniimagenet --model resnet12 --test-features 'saves/features/minifeaturesAS1.pt55' --preprocessing ME
However, the result is this: Inductive 1-shot: 58.44% (± 0.20%) Inductive 5-shot: 75.34% (± 0.16%)
When I use EASY, the result is: Inductive 1-shot: 59.67% (± 0.21%) Inductive 5-shot: 76.83% (± 0.16%).
Is there something wrong?
What is the running command for the experimental results of EASY 3xResNet12?