Hello, could you tell us how you divide the validation and test sets? I have implemented your code, but my kappa is only 0.81.
Hi, we use the validation and test sets as officially split by EyePACS, and the partial datasets are obtained by randomly sampling from the training set. Which experiment did you run?
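For anyone reproducing the partial-data experiments, a minimal sketch of such random subsampling might look like the following; the file name trainLabels.csv and the 10% fraction are assumptions for illustration, not necessarily the authors' exact setup:

```python
import pandas as pd

# Minimal sketch of building a partial training set by random sampling.
# "trainLabels.csv" and the 10% fraction are illustrative assumptions.
train = pd.read_csv("trainLabels.csv")             # Kaggle EyePACS training labels
partial = train.sample(frac=0.1, random_state=0)   # random 10% subset of the training set
partial.to_csv("trainLabels_10pct.csv", index=False)
```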
Hi, I have implemented the code from "Lesion-Based Contrastive Learning for Diabetic Retinopathy Grading from Fundus Images" and "Identifying the key components in ResNet-50 for diabetic retinopathy grading from fundus images: a systematic investigation". I would like to know how you split the validation and test sets into 10906:42670. Could you provide a link? Thank you.
We follow previous EyePACS grading works: the images marked for public usage in the solution file are used for validation, and the remaining (private) images form the test set. Is the kappa of 0.81 the result of the transfer-capacity evaluation using the entire training set? Did you use eyepacs.yaml as the config file?
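In case it helps other readers, here is a minimal sketch of that split; the file name retinopathy_solution.csv and the "Usage"/"Public"/"Private" labels are assumptions based on the standard Kaggle solution file, so adjust them to your download:

```python
import pandas as pd

# Minimal sketch of the validation/test split described above.
# Assumes the Kaggle retinopathy_solution.csv with a "Usage" column
# holding "Public"/"Private" labels; names may differ in your download.
solution = pd.read_csv("retinopathy_solution.csv")

val_df = solution[solution["Usage"] == "Public"]    # validation set
test_df = solution[solution["Usage"] == "Private"]  # test set

print(len(val_df), len(test_df))  # should roughly match the 10906:42670 split
```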
Thank you for your answer. I ran the experiment with the validation and test sets downloaded from Kaggle: I split the whole test file into a validation set and a test set at a ratio of 0.2 and used eyepacs.yaml as the configuration file. I will now rerun the experiment following your method, hoping for a higher kappa. Thank you again for your generous answer; I have no further questions.
You're welcome. If you still cannot achieve the reported result, please let us know and we will try to figure it out. Thank you.