emma-sjwang / pOSAL

Code for TMI paper: Patch-based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation
MIT License

Coding missing~ #5

Closed Just-do-it-LB closed 3 years ago

Just-do-it-LB commented 4 years ago

I am very interested in your excellent work. After scanning it, I found some important code missing.

In the file test_DGS.py:

Line 22: `from Utils.utils import save_img, save_per_img`
Line 23: `from Utils.evaluate_segmentation import evaluate_segmentation_results`

`save_per_img` and `evaluate_segmentation_results` are missing their implementations. I am looking forward to receiving your update.

emma-sjwang commented 4 years ago

Please refer to this file https://github.com/EmmaW8/BEAL/blob/master/utils/Utils.py and this one https://github.com/ignaciorlando/refuge-evaluation/blob/master/evaluation_metrics/evaluation_metrics_for_segmentation.py

emma-sjwang commented 4 years ago

The original code was deleted, since a long time has passed since this project. I hope these links can help you.
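Until the missing utilities are restored, here is a minimal sketch of what `save_per_img` could look like, assuming it overlays a binary segmentation mask on the input image and writes the composite to disk. The real BEAL/pOSAL utility draws disc and cup contours with more styling; this signature and logic are illustrative only:

```python
import numpy as np
from PIL import Image

def save_per_img(image, mask, out_path):
    """Illustrative sketch only: blend a green overlay onto the pixels
    covered by a binary mask and save the composite as an image file."""
    img = np.asarray(image, dtype=np.float32)
    overlay = img.copy()
    overlay[mask > 0] = [0.0, 255.0, 0.0]   # tint the segmented region green
    blended = (0.6 * img + 0.4 * overlay).astype(np.uint8)
    Image.fromarray(blended).save(out_path)
```

For the contour-drawing behavior of the original, see the linked BEAL `Utils.py`.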

Just-do-it-LB commented 4 years ago

Thanks a lot. But I am sorry to report another missing file.

In this file, https://github.com/ignaciorlando/refuge-evaluation/blob/master/evaluation_metrics/evaluation_metrics_for_segmentation.py:

Line 7: `from util.file_management import get_filenames, save_csv_mean_segmentation_performance, save_csv_segmentation_table`

`file_management.py` is missing.

emma-sjwang commented 4 years ago

You can learn from the official evaluation code of the REFUGE Challenge: https://github.com/ignaciorlando/refuge-evaluation. What you cannot find is in the parent folder. My code is also derived from this official evaluation code. Sorry that my original code has been deleted.

Just-do-it-LB commented 3 years ago

Thanks a lot. Furthermore, could you provide me the weights for refuge/DA_patch/Generator/generator_100.h5? I want to see the excellent performance of your model. Is it weights1.h5?

Just-do-it-LB commented 3 years ago

It would be my honor to get your help. Could you tell me why and how the five generators are combined to get the final result? Thanks.

emma-sjwang commented 3 years ago

Hi,

For the weight files, they were generated by 5-fold cross-validation, so I cannot say which one is best and did not test that (the training data differs for each model).

Do you mean the model ensembling? It is shown in predict.py.
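For readers who have not opened predict.py yet, the ensembling idea can be sketched like this (the function name and threshold value are illustrative, not the repo's exact API): average the per-pixel probability maps of the five fold models, then threshold.

```python
import numpy as np

def ensemble_predict(models, image, threshold=0.5):
    """Sketch of 5-fold ensembling: each fold model maps an image to a
    per-pixel probability map; the ensemble averages them and thresholds."""
    probs = [m(image) for m in models]   # one (H, W) probability map per fold
    mean_prob = np.mean(probs, axis=0)
    return (mean_prob > threshold).astype(np.uint8)
```

Averaging probabilities before thresholding is a standard way to combine cross-validation folds: a pixel is segmented only if the folds agree on it strongly enough on average.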

Just-do-it-LB commented 3 years ago

Your code in predict.py is very clear and I have now understood it. 5-fold cross-validation is my answer. Thanks.

In addition, in pOSAL/Utils/data_generator.py, in the function GD_Gene(...), there is this code:

```python
if source:
    label = 0
else:
    label = 1
...
Y2_train[batch_index, :, :, 0] = np.zeros([16, 16]) + smoothlabel
```

Is the above code correct? In train_DGS.py there is:

```python
trainDS_Gene = GD_Gene(batch_size, './data/' + dataset + '/train0/disc_small', True,
                       CDRSeg_size=CDRSeg_size, phase='train', noise_label=False)
```

"True" will make GD_Gene generate a fake zero label of shape [16, 16, 1]. But shouldn't the train0 section get a real one label of [16, 16, 1]?

An-BN commented 3 years ago

I tried but could not rewrite the two functions save_per_image() and evaluate_segmentation_results() for the test phase in pOSAL. Can anyone give me some help with this? Thanks a lot!!

emma-sjwang commented 3 years ago

Hi, please refer to https://github.com/EmmaW8/pOSAL/blob/master/Utils/utils.py#L61 for save_per_image() .

The evaluate_segmentation_results() is essentially the same as https://github.com/ignaciorlando/refuge-evaluation/blob/master/evaluation_metrics/evaluation_metrics_for_segmentation.py.

An-BN commented 3 years ago

You’re the best. Thank you so much!

An-BN commented 3 years ago

Hi, I am using this topic for my project at the university, so it is very important to me. Can you tell me how to get the CDR = VC/VD value and how to use it for clinical glaucoma classification? I would be forever grateful for your help.

emma-sjwang commented 3 years ago

Hello, the CDR calculation is inherited from another project: https://github.com/HzFu/MNet_DeepCDR/blob/master/mnet_deep_cdr/Step_4_CDR_output.m

You can refer to their MATLAB code.
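As a rough illustration of what that MATLAB script computes, the vertical cup-to-disc ratio can be sketched in Python from the two binary masks. The function name and details below are hypothetical; refer to the MATLAB code above for the exact procedure. A CDR above roughly 0.6 is often cited as a glaucoma-suspect indicator, though real clinical screening relies on more than this single number.

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Hypothetical sketch: CDR = vertical diameter of the cup region
    divided by the vertical diameter of the disc region."""
    def vertical_diameter(mask):
        rows = np.where(mask.any(axis=1))[0]   # row indices touched by the region
        return 0 if rows.size == 0 else rows[-1] - rows[0] + 1
    disc_d = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc_d if disc_d else 0.0
```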

emma-sjwang commented 3 years ago

> Your code in predict.py is very clear and I have now understood it. 5-fold cross-validation is my answer. Thanks.
>
> In addition, in pOSAL/Utils/data_generator.py, in the function GD_Gene(...), there is this code: `if source: label = 0 else: label = 1 ... Y2_train[batch_index, :, :, 0] = np.zeros([16, 16]) + smoothlabel`
>
> Is the above code correct? In train_DGS.py there is `trainDS_Gene = GD_Gene(batch_size, './data/' + dataset + '/train0/disc_small', True, CDRSeg_size=CDRSeg_size, phase='train', noise_label=False)`. "True" will make GD_Gene generate a fake zero label of [16, 16, 1], but shouldn't the train0 section get a real one label of [16, 16, 1]?

@Just-do-it-LB Sorry, I missed your comments. In our code, we use True/False (1/0) to represent the Source/Target domain, respectively, through trainDS_Gene/trainDT_Gene.

To minimize the domain gap, during the adversarial step we give trainAdversarial_Gene the label True/1, which pushes the target-domain output from 0 toward 1. This confuses the discriminator so it cannot distinguish target data from source data.

Thank you.
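To make this convention concrete, here is a minimal sketch of how the [16, 16, 1] patch-discriminator label maps could be built. The function name and smoothing detail are illustrative, reconstructed from the GD_Gene snippet quoted earlier, not the repo's exact code:

```python
import numpy as np

def discriminator_labels(batch_size, source, smooth=0.1):
    """Illustrative sketch of the 16x16 patch-discriminator label maps:
    source patches map to 0, target patches to 1, with optional
    label smoothing that nudges the hard labels toward the middle."""
    label = 0.0 if source else 1.0
    y = np.full((batch_size, 16, 16, 1), label, dtype=np.float32)
    if smooth:
        y = np.abs(y - smooth)   # 0 -> smooth, 1 -> 1 - smooth
    return y
```

During the adversarial pass, target-domain batches are paired with the opposite label so the segmentation network learns to produce outputs the discriminator cannot tell apart from source-domain outputs, as explained above.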

emma-sjwang commented 3 years ago

Hey guys,

Sorry for the missing code. I have fixed it in this version; you may try it. If you have any other questions, please open a new issue. I will close this one.