philnovv opened this issue 2 years ago
Hi all,

I was just reading through the paper and the code to get a better understanding of the implementation. Am I right to assume that the "patches" used for the InfoNCE loss are actually single multi-channel pixels sampled from various layers of the encoder? I'm inclined to think this from looking at the implementation, but the paper explicitly says that native (non-encoded) patches of image intensities are also used to compute the InfoNCE loss.

Any clarification would be great, thanks.

Hello,

You are right that the "patches" are implicitly defined by the receptive field of each single multi-channel location sampled from various layers of the encoder. Conceptually, the features can be anything from pixels to deep features. In practice, we pick every 4th layer output of the encoder network. For example, the first feature would be produced from the output of the first convolution, applied to the pixels before any downsampling. Sorry if there was confusion.
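To make the reply concrete, here is a minimal sketch of the sampling idea, not the repo's actual code: the toy encoder, the tapped layer indices (`tap_layers`), and the patch count (`num_patches`) are all illustrative assumptions. It shows how single multi-channel spatial locations can be sampled from several encoder layers to serve as the "patches" for an InfoNCE-style loss.

```python
import torch
import torch.nn as nn

# Hypothetical toy encoder; the real network and layer choices differ.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 7, padding=3),              # layer 0: conv on raw pixels, no downsampling
    nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1),  # layer 2: downsampled features
    nn.ReLU(),
    nn.Conv2d(128, 256, 3, stride=2, padding=1), # layer 4: deeper features
    nn.ReLU(),
)

tap_layers = [0, 2, 4]  # assumed layer indices to sample from
num_patches = 256       # assumed number of spatial locations per layer

def sample_features(x):
    """Run the encoder, and at each tapped layer sample num_patches random
    spatial locations; each sampled C-dim vector is one implicit 'patch'."""
    feats = []
    for i, layer in enumerate(encoder):
        x = layer(x)
        if i in tap_layers:
            b, c, h, w = x.shape
            flat = x.flatten(2).permute(0, 2, 1)       # (B, H*W, C)
            idx = torch.randperm(h * w)[:num_patches]  # random spatial indices
            feats.append(flat[:, idx, :])              # (B, num_patches, C)
    return feats

img = torch.randn(1, 3, 256, 256)
for f in sample_features(img):
    print(f.shape)  # (1, 256, C_layer) for each tapped layer
```

The key point from the reply is that each sampled vector's implicit patch size is its receptive field: a tap right after the first convolution behaves like a small pixel-level patch, while taps at deeper, downsampled layers correspond to progressively larger image regions.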