RM-Zhang / SCPNet

This is the official implementation of ECCV2024 paper “SCPNet: Unsupervised Cross-modal Homography Estimation via Intra-modal Self-supervised Learning”.
Apache License 2.0

About experiments setting #1

Open songsang7 opened 1 month ago

songsang7 commented 1 month ago

Hello. I found your SCPNet paper very impressive and enjoyed reading it. I have 3 questions regarding the experiments:

  1. In the paper, you mention, 'For the RGB/NIR dataset, we use 103 pairs of images for training and 153 pairs for testing.' Could you provide more details on how these pairs were selected (or which pairs were selected)?

  2. For the 'flash' and 'harvard' datasets, how is the train/test split performed? Which pairs were selected for training and which for testing?

  3. In the provided code, it seems that A.RandomBrightnessContrast() is applied each time __getitem__() is called on the dataset. However, I didn't see this mentioned in the paper. Could you please confirm whether this is intended?

Thank you in advance for your assistance.

RM-Zhang commented 1 month ago

Thanks for your interest.

  1. (Q1 & Q2) The training/test pairs are randomly divided. We have uploaded the supplementary material in this repo (docs/SCPNet-supp.pdf). Please refer to it for more details.
  2. (Q3) A.RandomBrightnessContrast() is a common data augmentation technique. We apply it during intra-modal self-supervised learning to further improve generalization.
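For (Q1 & Q2), the exact split procedure and seed are not given in this thread (see the supplementary material for the actual pair lists); a random division like the one described could be sketched as follows, where `pair_ids`, `n_train`, and the seed are illustrative:

```python
import random

def split_pairs(pair_ids, n_train, seed=0):
    """Randomly divide image-pair IDs into train/test sets.

    The seed and ID scheme are hypothetical; the paper's actual split
    is listed in docs/SCPNet-supp.pdf.
    """
    ids = list(pair_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    return ids[:n_train], ids[n_train:]

# e.g. 256 RGB/NIR pairs -> 103 train / 153 test, as in the paper
train_ids, test_ids = split_pairs(range(256), n_train=103)
```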
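For (Q3), a simplified NumPy stand-in for what A.RandomBrightnessContrast() does on each __getitem__ call is sketched below; the limits and the float-image convention are assumptions, not the repo's actual configuration:

```python
import random
import numpy as np

def random_brightness_contrast(img, brightness_limit=0.2, contrast_limit=0.2, p=0.5):
    """Illustrative per-sample brightness/contrast jitter.

    img: float array in [0, 1]. With probability p, scales contrast by
    alpha in [1 - contrast_limit, 1 + contrast_limit] and shifts
    brightness by beta in [-brightness_limit, brightness_limit].
    """
    if random.random() >= p:
        return img  # augmentation skipped for this sample
    alpha = 1.0 + random.uniform(-contrast_limit, contrast_limit)
    beta = random.uniform(-brightness_limit, brightness_limit)
    return np.clip(img * alpha + beta, 0.0, 1.0)
```

Because the jitter is drawn independently on every call, each epoch sees a differently perturbed copy of the same intra-modal image, which is what drives the generalization benefit mentioned above.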