kunhe closed this issue 6 years ago
Our paper doesn't aim to solve the zero-shot retrieval problem. We only test on this protocol and find that our method still outperforms other methods that also don't consider zero-shot retrieval. If your method aims to solve the zero-shot retrieval problem, you need to evaluate your hashing method under the zero-shot retrieval protocol. If your method targets the standard hashing setting, where train and test come from the same domain, you don't need to test on the zero-shot retrieval protocol.
Good work! For reproducibility and comparison purposes can you clarify and give details about the zero-shot protocol (it is not clear in [28])?
Just remove one class from training and use the other 99 classes (for ImageNet100, for example) for training. Then test on that one held-out class.
In the zero-shot protocol, how exactly is the dataset divided? In other words, how many samples of the unseen class are selected as the query set? Thank you very much~
I remember I randomly selected one class but forget which one. I tried several times and the performance did not change much.
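Based on the description above, the split could be sketched roughly as follows. This is only an illustration of the idea (the class is chosen at random, all samples of the held-out class serve as queries); the function name, the dictionary-based data layout, and the seed are my own assumptions, not details confirmed in this thread:

```python
import random

def zero_shot_split(samples_by_class, seed=0):
    """Hold out one randomly chosen class for zero-shot testing.

    samples_by_class: dict mapping class label -> list of samples.
    Returns (train_samples, query_samples, held_out_class).
    Note: which class is held out and how many of its samples become
    queries are assumptions here; the thread does not pin them down.
    """
    rng = random.Random(seed)
    held_out = rng.choice(sorted(samples_by_class))
    # Train on the remaining 99 classes.
    train = [s for c, lst in samples_by_class.items()
             if c != held_out for s in lst]
    # Use the unseen class as the query set.
    query = list(samples_by_class[held_out])
    return train, query, held_out

# Toy example: 100 synthetic "classes" with 5 samples each.
data = {c: [f"img_{c}_{i}" for i in range(5)] for c in range(100)}
train, query, held_out = zero_shot_split(data)
assert len(train) == 99 * 5 and len(query) == 5
```

Rerunning with different seeds corresponds to the "tried several times" remark above: each seed holds out a different class.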
Hi, congrats on the nice work.
I'm wondering if it's possible for you to release the details of Zero-Shot Retrieval Protocol on ImageNet100.
In Table 2 of the paper, the protocol refers to [28]: C. Ma, I. W. Tsang, F. Peng, and C. Liu. Partial hash update via hamming subspace learning. TIP 2017. But unfortunately this paper is behind a paywall.
Also, this is pure guesswork, but I'm wondering if the correct reference should be: "How should we evaluate supervised hashing?" Alexandre Sablayrolles, Matthijs Douze, Nicolas Usunier, Hervé Jégou, ICASSP 2017.