Negin-Ghamsarian / Transformation-Invariant-Self-Training-MICCAI2023

MIT License

About result in OCT #3

Open qiaoqiangPro opened 10 months ago

qiaoqiangPro commented 10 months ago

Hello, Negin. My Test_Dice is about 30%, but your paper reports 40.0%. I would like to know what went wrong in my run.

Negin-Ghamsarian commented 10 months ago

Hello! You should get results very close to the ones reported in the paper if you use the same settings (Section 3). The hyperparameters can be found and changed in the config file. For instance, if you do not set hard_label_thr = 0.85, you can expect degraded performance compared to the reported results in Table 1.
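For readers following along, the `hard_label_thr` hyperparameter gates which predictions become pseudo-labels during self-training. The following is a minimal illustrative sketch of confidence thresholding, not the repository's actual code; `harden_pseudo_labels` is a hypothetical helper:

```python
import numpy as np

def harden_pseudo_labels(probs: np.ndarray, hard_label_thr: float = 0.85):
    """Illustrative sketch: turn soft foreground probabilities into hard
    pseudo-labels, keeping only pixels the model is confident about."""
    # Hard label: 1 where the foreground probability clears the threshold.
    hard = (probs >= hard_label_thr).astype(np.uint8)
    # Confidence mask: a pixel counts as confident if EITHER class
    # probability (p or 1 - p) reaches the threshold; unconfident pixels
    # would be excluded from the unsupervised loss.
    confident = np.maximum(probs, 1.0 - probs) >= hard_label_thr
    return hard, confident

# Toy 2x2 probability map.
probs = np.array([[0.95, 0.60],
                  [0.10, 0.88]])
labels, mask = harden_pseudo_labels(probs)
```

Lowering the threshold admits noisier pseudo-labels into training, which is one plausible mechanism for the degraded Dice mentioned above.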

qiaoqiangPro commented 10 months ago

> Hello! You should get results very close to the ones reported in the paper if you use the same settings (Section 3). The hyperparameters can be found and changed in the config file. For instance, if you do not set hard_label_thr = 0.85, you can expect degraded performance compared to the reported results in Table 1.

Thank you very much for your answer. I am implementing your data division exactly as it is in Config_ENCORE_AugLoss_UnsupOnTrain_T85.py. I opened a pull request aimed at fixing the division of the data into folds: if I follow the source code, the second fold ends up smaller than the first, and the third smaller than the second, which also affects the results of my reproduction. I need your help!

Negin-Ghamsarian commented 10 months ago

As I mentioned in response to your previous question, you should use the TrainIDs provided under TrainIDs_RETOUCH_DA in case you would like to reproduce our results.

qiaoqiangPro commented 10 months ago

> As I mentioned in response to your previous question, you should use the TrainIDs provided under TrainIDs_RETOUCH_DA in case you would like to reproduce our results.

Yes, you're right. Thank you for your help. I divided the data following the scheme you provided, but there is a slight difference in the number of entries in the resulting CSV files. The IRF split for SpectralisVsTopcon4_1.csv includes patient TRAIN047, and my file contains the slice TRAIN047_11, which is absent from your CSV. I checked the original image and it does have a mask, so this may be why my counts differ from yours. Do you know why this is?

[image]

qiaoqiangPro commented 10 months ago

Oh, I have combined them into one image, as follows:

[image]

Negin-Ghamsarian commented 10 months ago

> TRAIN047_11

This case exists in SpectralisVsTopcon4_1.csv at line 246.

qiaoqiangPro commented 10 months ago

> TRAIN047_11
>
> This case exists in SpectralisVsTopcon4_1.csv at line 246.

Oh sorry, this was an oversight on my part. I used the data division in your TrainIDs_RETOUCH_DA folder and ran the training again, but the results were still unsatisfactory. I then checked the Config_ENCORE_AugLoss_UnsupOnTrain_T85.py code under the folder configs_RETOUCH_DA_scSENet_ST4, where I had simply set image_transforms = Compose([RandomApply_Customized([ColorJitter(brightness=0.7, contrast=0.7, saturation=0.7, hue=0)...

instead of ColorJitter(brightness=0.6, contrast=0.6, saturation=0.4, hue=0) as in your example file. I will restore your settings and try again; currently the Dice over the four folds is 29%. That is still a large gap, and I need your help. [image]
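For context on why this config difference matters: torchvision's `ColorJitter(brightness=b)` samples a brightness factor uniformly from `[max(0, 1 - b), 1 + b]` (and likewise for contrast and saturation), so 0.7 yields a wider, stronger augmentation range than 0.6. A minimal sketch of the two settings' ranges; `jitter_range` is a hypothetical helper, not part of the repository:

```python
def jitter_range(strength: float) -> tuple[float, float]:
    # Mirrors torchvision's ColorJitter semantics for a single float:
    # the multiplicative factor is drawn uniformly from
    # [max(0, 1 - strength), 1 + strength].
    return (max(0.0, 1.0 - strength), 1.0 + strength)

# The setting used in this reproduction vs. the repo's example config:
mine = jitter_range(0.7)   # wider interval -> harsher color augmentation
repo = jitter_range(0.6)
```

Stronger jitter makes the augmented views drift further from the originals, which can plausibly shift the augmentation-based consistency loss and the final Dice, so restoring the example-file values is a reasonable first step.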