Hi, thank you for releasing such excellent research on image enhancement. May I ask a few questions about your experiments?
How did you split the EyeQ dataset into training/validation/testing sets? In your paper, you mention: "We utilize EyeQ [3] as our first dataset, which consists of 28,792 fundus images with three quality grades ("Good", "Usable", "Reject") and has been divided into a training set, a validation set, and a testing set." However, on the EyeQ dataset website the data is only split into training/testing sets, and I couldn't find anything on the Kaggle challenge website explaining how a validation set was created.
Also, how long does it take to train your network on your machine?
We followed the DR classification dataset split used in my teammate's work (arxiv.org/pdf/2110.14160.pdf). Alternatively, simply holding out 20% of the training data as a validation set should also work fine.
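If you go with the simple 20% hold-out, it could be sketched like this (this is not the exact protocol from the referenced paper; the function name and seed are illustrative):

```python
import random

def split_train_val(samples, val_ratio=0.2, seed=42):
    """Shuffle and split a list of samples into train/val subsets.

    Illustrative helper only: the paper's actual split follows
    arxiv.org/pdf/2110.14160.pdf; this sketches the 20%-of-training
    fallback suggested above.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_ratio)
    return shuffled[n_val:], shuffled[:n_val]

# Example: 100 image IDs -> 80 train / 20 val
train, val = split_train_val([f"img_{i:04d}.png" for i in range(100)])
```

Fixing the seed keeps the split reproducible across runs, which matters if you compare checkpoints trained at different times.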
We trained the full framework for about one day on eight RTX 2080 Ti GPUs. We observed that a smaller batch size does not significantly affect performance, so you can try training on fewer GPUs with a smaller batch size.
Thank you!