YijinHuang / Lesion-based-Contrastive-Learning

This is the official implementation of the paper Lesion-based Contrastive Learning for Diabetic Retinopathy Grading from Fundus Images.

About thresholds in classification #4

Closed cs1151142690 closed 2 years ago

cs1151142690 commented 2 years ago

Nice work, I must say, and I have one question I don't understand. I notice that you set a threshold manually in the classification task (as follows), and I wonder why it is like this. Is it a fixed rule, or is this the setting that performs best in your work?

self.thresholds = [-0.5 + i for i in range(num_classes)] if not thresholds else thresholds
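Read in context, the snippet above maps the continuous MSE-regression output to a discrete grade by counting how many cut points the score exceeds; the default cut points sit at the midpoint between consecutive integer grades. A minimal sketch of that mapping (the function names are mine, not from the repo):

```python
def make_thresholds(num_classes, thresholds=None):
    # Default cut points mirror the snippet above: [-0.5, 0.5, 1.5, ...],
    # i.e. the midpoint between every two consecutive integer grades.
    return thresholds if thresholds else [-0.5 + i for i in range(num_classes)]

def predict_grade(score, thresholds):
    # The grade is the number of cut points the score reaches, minus one,
    # clamped to the valid grade range. E.g. with 5 classes, a score of 1.6
    # reaches -0.5, 0.5, and 1.5, so it becomes grade 2.
    grade = sum(score >= t for t in thresholds) - 1
    return max(0, min(grade, len(thresholds) - 1))
```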

Hello, I think this research is very valuable, and I have tried to reproduce your work. One question: when you use MSE for the classification task, a threshold appears. Is this threshold a general rule for MSE, or is it simply the value that gives the best results? Thanks!

YijinHuang commented 2 years ago

Thank you for your interest. This threshold is not fixed and can be adjusted based on the prediction results of the validation set. We tried it on EyePACS and got a test kappa improvement of about 0.1-0.2. This improvement is not significant, so we simply set the threshold to the middle of every two categories.
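Since the reply says the cut points can be adjusted based on validation-set predictions, here is a hedged sketch of what that tuning could look like: shift the default thresholds by a common offset and keep the offset that maximizes validation quadratic-weighted kappa. The metric choice and every function name here are my assumptions for illustration, not the repo's code:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    # Quadratic weighted kappa, a common DR-grading metric (my assumption here).
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    idx = np.arange(n_classes)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    E = np.outer(O.sum(1), O.sum(0)) / O.sum()  # expected matrix under independence
    return 1.0 - (w * O).sum() / (w * E).sum()

def apply_thresholds(scores, thresholds):
    # Same rule as the repo snippet: count cut points reached, clamp to valid grades.
    grades = (scores[:, None] >= np.asarray(thresholds)).sum(1) - 1
    return np.clip(grades, 0, len(thresholds) - 1)

def tune_offset(scores, labels, n_classes, offsets=np.linspace(-0.3, 0.3, 13)):
    # Grid-search a single shift of the default midpoints on the validation set.
    base = np.array([-0.5 + i for i in range(n_classes)])
    best = max(offsets, key=lambda d: quadratic_weighted_kappa(
        labels, apply_thresholds(scores, base + d), n_classes))
    return base + best
```

Because a zero offset is in the grid, the tuned thresholds can only match or beat the default midpoints on the validation set.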

cs1151142690 commented 2 years ago

Thanks for your reply. I've tried your classification model: when I load your model, kappa is almost as high as you reported, but when I use other unsupervised methods (using eyepacs.yaml), kappa is quite low (only 6%, sometimes negative). I also tried the supervised model, and kappa is still low (really strange, I think). How did you run the supervised method mentioned in the paper — just set 'pretrain' to True and don't load the contrastive-learning model (that's what I do)? Thanks in advance!

YijinHuang commented 2 years ago

Yes, you are right. Did you use the repo "pytorch-classification" at commit version bddd0a0? I think that might be the problem. In that version, we migrated the model library from torchvision to timm. Using the ResNet50 from timm with eyepacs.yaml, we observed a low kappa of 73.23%, though not as low as the kappa you got. If you are using that version, please try switching the repo to commit ffc342f; we got the expected kappa with that version and eyepacs.yaml on our machine. If not, please tell us more details about your training so we can try to find the problem. We are also going to fix the problem in the new version of the code as soon as possible. Thank you for helping us find this issue.

cs1151142690 commented 2 years ago

I am really sorry to bother you again. I processed the dataset in your way and re-ran the contrastive model with the batch size changed to 240 (due to GPU limitations) and the number of epochs reduced to 500; the other parameters were unchanged. The model collapsed (kappa is only 1% in classification, and y_pred is almost the same for every image). That's really strange. As far as I know, contrastive learning is rarely used on fundus images. Have you ever had a model collapse during training? Thanks in advance!

YijinHuang commented 2 years ago

Please feel free to ask me any questions. What is the input resolution for contrastive learning and for classification training? I have never met the collapsing problem in my experiments, and the minimum batch size I have tried is 512. As reported in SimCLR, contrastive learning is very sensitive to batch size. I recommend reducing the input resolution and increasing the batch size to validate the correctness of the experimental setup.
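For context on why batch size matters so much here: SimCLR's NT-Xent loss treats every other augmented image in the batch as a negative, so a batch of N image pairs gives each sample 2(N-1) negatives, and shrinking the batch makes the contrastive task much easier to collapse. A minimal NumPy sketch of the loss, purely illustrative and not the repo's implementation (it assumes nonzero embedding rows):

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    # NT-Xent over paired embeddings z1, z2 of shape [N, d]: each row of z1
    # has its positive in the matching row of z2, and the other 2(N-1)
    # samples in the batch act as negatives, so difficulty scales with N.
    z = np.concatenate([z1, z2])                      # [2N, d]
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine normalization
    sim = z @ z.T / temperature                       # scaled similarities
    np.fill_diagonal(sim, -np.inf)                    # drop self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()                           # cross-entropy against the positive
```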

By the way, did the commit version ffc342f of "pytorch-classification" solve your previous problem? Thank you.

cs1151142690 commented 2 years ago

Thanks. The image size is 512 after my preprocessing, the input size for contrastive learning is 128, and it is 512 for classification (the same as the downloaded files). The commit ffc342f does solve my previous problem. Next I will do more experiments as you recommend; I hope it will not collapse again. I will reply if it works. Thanks for your help.