In addition, when I use test_extractor_pa.py for the 5-way 1-shot task with the pre-trained/distilled multi-domain feature extractor, the average accuracy for URL (or TSA) is about 40% on ImageNet, which is far from the 49.6% (or 48.0%) reported in Table 8 (Appendix of TSA).
Hi,
Thanks for your questions!
For the first question, you may already have noticed that we provide usage instructions for the 5-way-5-shot and 5-way-1-shot settings if you scroll down the README to the 'Other Usage' section.
For the results, I just re-ran the experiments and I confirm that I can reproduce results similar to those in the paper (around 47.58 ± 1.08 for URL and 47.10 ± 1.05 for TSA on ImageNet). I recommend using our code and re-running the experiments for all methods. If you want to reproduce the results shown in the paper, set shuffle_buffer_size=0 in the reader file (this was an issue in the original Meta-Dataset) and make sure you use the shuffled datasets, as mentioned in the issue. I also re-ran the experiments with shuffle_buffer_size=0 and I got 49.55 ± 1.06 for URL and 49.03 ± 1.04 for TSA on ImageNet.
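For reference, here is a minimal tf.data illustration of what that flag controls, assuming the reader follows Meta-Dataset's reader.py, where each class's record stream is only shuffled when the buffer size is positive (the variable names here are illustrative, not this repository's exact code):

```python
import tensorflow as tf

# Toy stand-in for one class's stream of records in the episode reader.
records = tf.data.Dataset.range(10)

shuffle_buffer_size = 0  # paper setting: no read-time shuffling
if shuffle_buffer_size > 0:
    # default setting (e.g., 1000) shuffles records within a buffer at read time
    records = records.shuffle(shuffle_buffer_size)

# With shuffle_buffer_size=0, records are read in stored order, which is why
# the shuffled (converted) datasets on disk are also required.
print(list(records.as_numpy_iterator()))
```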
Note that, in our paper, all methods are evaluated under the same setting (shuffle_buffer_size=0 with shuffled datasets) for the 5-shot and 1-shot settings, so the ranking stays the same, although the results can be slightly affected by setting shuffle_buffer_size=1000. Please refer to our paper and supplementary material for more details.
Hope this is helpful for you!
Best, WH
Thanks for your reply.
Indeed, I tried re-running test_extractor.py for the 1-/5-shot settings, following the 'Other Usage' section.
In fact, my key concern is why there is no task-adaptation process (updating A_β for URL, or A_β and α for TSA) in test_extractor.py. It seems that the training/support set of the target task is not used to update the model in the 1-/5-shot setting. Is my understanding correct? Please correct me if not. Thanks.
test_extractor.py implements adapting the model by learning a classifier (e.g., NCC, SVM) on top of a frozen feature extractor using the support set, for different settings. For testing our proposed URL and TSA, please use test_extractor_pa.py and test_extractor_tsa.py, respectively (note that the parameters of the feature extractor are frozen in URL and TSA as well).
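To make the difference concrete, below is a minimal PyTorch-style sketch of the two evaluation modes: an NCC classifier on frozen features (what test_extractor.py does) versus pre-classifier alignment, which learns a linear transform A_β on the support set (the idea behind test_extractor_pa.py). All names, the optimizer, and the iteration count are illustrative assumptions, not this repository's exact code:

```python
import torch
import torch.nn.functional as F

# Dummy stand-ins so the sketch runs; in the repo these come from the
# pre-trained extractor and the sampled episode (all names are hypothetical).
n_way, d = 5, 512
backbone = torch.nn.Identity()                         # placeholder for the frozen extractor
x_s, y_s = torch.randn(n_way, d), torch.arange(n_way)  # 5-way 1-shot support set
x_q = torch.randn(75, d)                               # query set

def ncc_logits(f_query, f_support, y_support, n_way):
    # Nearest-centroid classifier: negative distance to class prototypes as logits.
    prototypes = torch.stack([f_support[y_support == c].mean(0) for c in range(n_way)])
    return -torch.cdist(f_query, prototypes)

# --- test_extractor.py style: classifier on frozen features, no adaptation ---
with torch.no_grad():
    f_s, f_q = backbone(x_s), backbone(x_q)            # extractor stays frozen
pred = ncc_logits(f_q, f_s, y_s, n_way).argmax(1)

# --- test_extractor_pa.py style (URL): adapt A_beta on the support set ---
A_beta = torch.eye(d, requires_grad=True)              # linear transform, identity init
opt = torch.optim.Adadelta([A_beta], lr=0.1)           # optimizer/lr/steps are assumptions
for _ in range(40):
    logits = ncc_logits(f_s @ A_beta, f_s @ A_beta, y_s, n_way)
    loss = F.cross_entropy(logits, y_s)
    opt.zero_grad(); loss.backward(); opt.step()
A = A_beta.detach()
pred = ncc_logits(f_q @ A, f_s @ A, y_s, n_way).argmax(1)
```

In both modes the extractor's weights never change; the only learnable parameters at test time are the classifier head (and, for URL/TSA, the lightweight transform/adapters fitted on the support set).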
Thanks!
So the results in Table 4 were obtained with test_extractor_pa.py and test_extractor_tsa.py, not test_extractor.py?
Table 4 compares results between different methods; the results of URL and TSA can be obtained with test_extractor_pa.py and test_extractor_tsa.py, respectively. URL and TSA are different methods. You should be able to obtain the URL results with test_extractor_tsa.py too, but using test_extractor_pa.py is simpler.
test_extractor.py is used to obtain results in the first cell of Table 3.
If so, the 'Other Usage' section in README.md seems misleading?
I will try it again following your valuable suggestions. Thanks.
That section gives an example of specifying an evaluation setting. I will add more details to that section. Thanks!
Hi Wei-Hong, thanks for your nice work; it brings me a lot of inspiration.
I wonder, is there no model (adapter) update in the 5-way 1-shot setting (i.e., in test_extractor.py)? Or, how can I get the results reported in Table 4?
Thanks! Jim