WXinlong / DenseCL

Dense Contrastive Learning (DenseCL) for self-supervised representation learning, CVPR 2021 Oral.
GNU General Public License v3.0
544 stars 70 forks

How to visualize dense correspondence? #13

Open unlabeledData opened 3 years ago

unlabeledData commented 3 years ago

This is great work. Could you give more details about the visualization of dense correspondence?

WXinlong commented 3 years ago

For two feature points across the two views, if each is the other's most similar point (i.e., they are mutual nearest neighbors) and their similarity is greater than a threshold, e.g., 0.9, the match is kept and visualized.
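A minimal sketch of this matching rule (my own illustration, not code from this repo; `mutual_matches` is a hypothetical helper and assumes the features are already L2-normalized so a dot product gives cosine similarity):

```python
import numpy as np

def mutual_matches(f1, f2, threshold=0.9):
    """Keep matches that are mutual nearest neighbors across two views
    and whose cosine similarity exceeds `threshold`.

    f1: (N, C) L2-normalized features from view 1
    f2: (M, C) L2-normalized features from view 2
    Returns a list of (i, j) index pairs into f1 and f2.
    """
    sim = f1 @ f2.T                  # (N, M) cosine similarity matrix
    best_12 = sim.argmax(axis=1)     # for each view-1 point, its best view-2 match
    best_21 = sim.argmax(axis=0)     # for each view-2 point, its best view-1 match
    matches = []
    for i, j in enumerate(best_12):
        # keep only mutual nearest neighbors above the threshold
        if best_21[j] == i and sim[i, j] > threshold:
            matches.append((i, j))
    return matches
```

The mutual check discards one-sided matches, and the threshold drops pairs that are each other's best option but still dissimilar in absolute terms.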

CoinCheung commented 3 years ago

@WXinlong Hi, I am also not clear on this part of the paper. Could you share some details:

  1. How are the feature points obtained? Which feature is used for matching: the one directly from the backbone (the ResNet output) or the one from the dense head (the one used to compute the loss)?
  2. Since both the backbone and the dense head have a downsample rate of 32, how can we recover the exact position of a point on the original image from a feature point?
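Regarding the second question, one common convention for stride-32 feature maps (an assumption on my part, not something confirmed for DenseCL) is to map each feature-grid cell to the center of its 32x32 window on the input image:

```python
def grid_to_image(row, col, stride=32):
    """Map a feature-map cell (row, col) to approximate pixel coordinates
    (x, y) on the original image: the center of its stride x stride window.
    A common convention for visualization; not necessarily what the
    authors used.
    """
    x = (col + 0.5) * stride
    y = (row + 0.5) * stride
    return x, y
```

For example, cell (0, 0) would be drawn at pixel (16, 16); the precise sub-pixel location is not recoverable after downsampling, but this is accurate enough for drawing correspondence lines.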