QtacierP / ISECRET

I-SECRET: Importance-guided fundus image enhancement via semi-supervised contrastive constraining
MIT License

What are the training, validation, and testing sets in your work? #4

Closed RuoyuGuo closed 1 year ago

RuoyuGuo commented 1 year ago

Hi, thank you for releasing such excellent research on image enhancement. May I ask a few questions about your experiments?

  1. How did you split the EyeQ dataset into training/validation/testing sets? In your paper, you mention that "We utilize EyeQ [3] as our first dataset, which consists of 28792 fundus images with three quality grades (“Good”, “Usable”, “Reject”) and has been divided into a training set, a validation set, and a testing set." However, the EyeQ dataset website only describes a training/testing split, and I couldn't find any explanation of the split on the Kaggle challenge website either.

  2. How long does it take to train your network on your machine?

Thank you!

QtacierP commented 1 year ago

Thanks for your attention.

  1. We followed the DR classification dataset split used in my teammate's work (arxiv.org/pdf/2110.14160.pdf). Alternatively, simply holding out 20% of the training data as a validation set should also work.
  2. We trained the whole framework for one day on 8 RTX 2080 Ti GPUs. We observed that a smaller batch size does not significantly affect performance, so you can train it on fewer GPUs with smaller batch sizes.
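For reference, holding out 20% of the training data as a validation set might look like the sketch below. This is just one reasonable way to do it, not the split used in the paper; `image_ids` is a placeholder for your list of EyeQ training image identifiers.

```python
import random

def split_train_val(image_ids, val_ratio=0.2, seed=42):
    """Hold out a fraction of the training IDs as a validation set.

    A fixed seed keeps the split reproducible across runs.
    Returns (train_ids, val_ids).
    """
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # shuffle a copy, deterministically
    n_val = int(len(ids) * val_ratio)
    return ids[n_val:], ids[:n_val]

# Example usage with dummy IDs:
train_ids, val_ids = split_train_val(range(100))
```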
QtacierP commented 1 year ago

Also, since we used InstanceNorm in our backbone, the model holds up much better than a BatchNorm-based one when your computational resources (and therefore batch sizes) are limited.
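A minimal NumPy sketch of why InstanceNorm is insensitive to batch size: it normalizes each (sample, channel) feature map with that map's own mean and variance, whereas BatchNorm (in training mode) pools statistics over the whole batch, so small batches give noisy estimates. The function names here are illustrative, not taken from the I-SECRET code.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """InstanceNorm (no affine): per-sample, per-channel statistics.
    x has shape (N, C, H, W); each (n, c) map is normalized independently,
    so the result for a sample does not depend on the rest of the batch."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def batch_norm_train(x, eps=1e-5):
    """BatchNorm in training mode (no affine): statistics are pooled
    over the batch dimension as well, so they vary with batch size."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.RandomState(0).randn(4, 3, 8, 8)
# A sample normalized alone matches its result inside the full batch:
assert np.allclose(instance_norm(x[:1]), instance_norm(x)[:1])
```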