Closed: Hazel-Heejeong-Nam closed this issue 6 months ago
Hi! Could you let me know which base model you chose for optimization when comparing the performance of the reproduced models? Specifically, for Table 1 in your paper, did you optimize hyperparameters per algorithm, or did you use a fixed feature extractor that was optimized for a single model (e.g. your proposed model, max-pooling, or something else)? Thank you in advance!

The feature extractor is frozen after self-supervised training (or you can simply pick an ImageNet-pretrained feature extractor). The common hyper-parameters are then shared and fixed across all aggregation methods.
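The evaluation protocol discussed in this thread (one frozen feature extractor shared by every aggregation method, with common hyper-parameters held fixed) can be sketched as follows. This is an illustrative stand-in, not the repository's actual code: the fixed linear projection plays the role of the frozen backbone, and max/mean pooling stand in for the aggregation methods being compared.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen backbone: weights are sampled once and
# never updated, so every aggregation method sees identical features.
W_frozen = rng.standard_normal((16, 8))

def extract_features(instances):
    """Frozen extractor: the same fixed weights for every method."""
    return instances @ W_frozen

# Two aggregation methods compared under identical features
# (hyper-parameters, if any, would likewise be shared and fixed).
def max_pool(features):
    return features.max(axis=0)

def mean_pool(features):
    return features.mean(axis=0)

bag = rng.standard_normal((5, 16))   # one bag of 5 instances
feats = extract_features(bag)        # identical input to both methods

print(max_pool(feats).shape)   # (8,)
print(mean_pool(feats).shape)  # (8,)
```

Because the extractor is frozen, any performance difference between the pooling functions is attributable to the aggregation step alone, which is the point of holding the backbone and common hyper-parameters fixed.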