baidu-research / NCRF

Cancer metastasis detection with neural conditional random field (NCRF)
Apache License 2.0

Cannot get correct FROC with resnet18 baseline ckpt #54

Closed xray-pku closed 3 years ago

xray-pku commented 3 years ago

I tried to calculate the FROC using your resnet18_base.ckpt, and I got:

Avg FP = 0.25  Sensitivity = 0.6061946902654868
Avg FP = 0.5   Sensitivity = 0.6769911504424779
Avg FP = 1     Sensitivity = 0.7345132743362832
Avg FP = 2     Sensitivity = 0.7876106194690266
Avg FP = 4     Sensitivity = 0.8185840707964602
Avg FP = 8     Sensitivity = 0.8628318584070797
Avg Sensitivity = 0.7477876106194691

I have excluded test_114.tif from the test set, but there is still a gap between my result and the value reported in the paper (0.7825). However, I do get the correct FROC with resnet18_crf.ckpt. Is the resnet18 baseline ckpt provided in this project the same one you used to calculate the FROC in the paper? Thanks a lot.
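(For reference, the final Avg Sensitivity above is simply the arithmetic mean of the six per-threshold sensitivities, which can be checked directly:)

```python
# Sensitivities reported above, at average FP rates 0.25, 0.5, 1, 2, 4, 8.
sensitivities = [
    0.6061946902654868,
    0.6769911504424779,
    0.7345132743362832,
    0.7876106194690266,
    0.8185840707964602,
    0.8628318584070797,
]

# CAMELYON16-style FROC score: the mean sensitivity over the six thresholds.
froc_score = sum(sensitivities) / len(sensitivities)
print(froc_score)  # matches the reported Avg Sensitivity, ~0.7477876106194691
```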

yil8 commented 3 years ago

@Xray0218 Hi, thanks for your interest in our work. I'm curious how you obtained the test_*.tif files? I remember the competition organizers removed those files...

xray-pku commented 3 years ago

@yil8 Thanks for your reply! I got the CAMELYON16 training and testing WSIs from this Google Drive share link: https://drive.google.com/drive/folders/0BzsdkU4jWx9BRWNMb2IwUjhENXM?resourcekey=0-i0G5pCeI2Hf6SwGOQ3eeDQ It contains a zip file named "lession_annotations.zip", from which I extracted the ground truth xml files. Then I converted the xml files to ground truth masks using this code:

# Requires the ASAP library's Python bindings
import multiresolutionimageinterface as mir

# Open the WSI (e.g. test_026.tif) to get its dimensions and pixel spacing
reader = mir.MultiResolutionImageReader()
mr_image = reader.open(tiff_file)
assert mr_image is not None

# Load the lesion annotations from the xml file
annotation_list = mir.AnnotationList()
xml_repository = mir.XmlRepository(annotation_list)
xml_repository.setSource(xml_file)
xml_repository.load()

# Rasterize the annotations into a tif mask at the WSI's full resolution
annotation_mask = mir.AnnotationToMask()
camelyon17_type_mask = False
monitor = mir.CmdLineProgressMonitor()
annotation_mask.setProgressMonitor(monitor)
label_map = ({'metastases': 255, 'normal': 0} if camelyon17_type_mask
             else {'_0': 255, '_1': 255, 'Tumor': 255,
                   '_2': 0, 'None': 0, 'Exclusion': 0})
conversion_order = (['metastases', 'normal'] if camelyon17_type_mask
                    else ['_0', '_1', 'Tumor', '_2', 'None', 'Exclusion'])
annotation_mask.convert(annotation_list, output_path,
                        mr_image.getDimensions(), mr_image.getSpacing(),
                        label_map, conversion_order)
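(For context, the xml files consumed above follow ASAP's annotation format: Annotation elements, each in a group such as _0 or _1, holding a polygon's Coordinate list. A minimal, self-contained sketch of that structure, with made-up coordinates:)

```python
# Hypothetical ASAP-style annotation XML, roughly matching the format that
# XmlRepository/AnnotationToMask consume; the values here are invented.
import xml.etree.ElementTree as ET

xml_text = """<ASAP_Annotations>
  <Annotations>
    <Annotation Name="Annotation 0" PartOfGroup="_0" Type="Polygon">
      <Coordinates>
        <Coordinate Order="0" X="1000.5" Y="2000.5"/>
        <Coordinate Order="1" X="1100.5" Y="2000.5"/>
        <Coordinate Order="2" X="1050.5" Y="2100.5"/>
      </Coordinates>
    </Annotation>
  </Annotations>
</ASAP_Annotations>"""

root = ET.fromstring(xml_text)
for ann in root.iter('Annotation'):
    # Collect the polygon vertices of each annotation
    coords = [(float(c.get('X')), float(c.get('Y')))
              for c in ann.find('Coordinates')]
    print(ann.get('PartOfGroup'), len(coords), 'vertices')
```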

It's strange, because with the resnet18_crf.ckpt in your repo I can get a roughly correct FROC (not identical: 0.7876 vs. 0.7934 in your paper), but my baseline FROC is clearly lower than the paper's.

yil8 commented 3 years ago

@Xray0218 Is this Google Drive folder for the test images or the ground truth masks? https://drive.google.com/drive/folders/0BzsdkU4jWx9BWk11WEtZZUNFY0U?resourcekey=0-U0E7SyHPJeQd77VAi3z15Q If I remember correctly, Evaluation_FROC.py directly uses the ground truth masks instead of the xml annotations. https://github.com/baidu-research/NCRF#froc-evaluation

xray-pku commented 3 years ago

@yil8 There are only xml annotations in the Google Drive folder; I can't get the ground truth masks directly, so I converted the xml annotations to ground truth masks with the code above. Maybe there's something wrong with my code. Could you send me the ground truth mask of test_026? My email is xurui@stu.pku.edu.cn. Thanks a lot!

yil8 commented 3 years ago

@Xray0218 sent

xray-pku commented 3 years ago

Thanks for your patience. I just checked the ground truth mask you sent: it's 1.4GB, but the mask I generated with the code above is only about 40MB. There's probably something wrong with my code, and I will try to find it.
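(One quick way to localize a bug like this is to load both masks at the same downsampled level and compare them pixel-wise, e.g. with a Dice overlap. A minimal sketch with NumPy; loading the actual mask arrays, e.g. via OpenSlide, is assumed to happen elsewhere, so tiny synthetic masks stand in here:)

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks of the same shape."""
    a = mask_a > 0
    b = mask_b > 0
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Synthetic check: the masks agree on 2 of 3 positive pixels each.
m1 = np.zeros((4, 4), dtype=np.uint8)
m2 = np.zeros((4, 4), dtype=np.uint8)
m1[0, 0] = m1[0, 1] = m1[1, 0] = 255
m2[0, 0] = m2[0, 1] = m2[3, 3] = 255
print(dice(m1, m2))  # 2*2/(3+3) ~ 0.667
```

A Dice near 1.0 between the generated and reference masks would point at a storage difference (compression, bit depth) rather than wrong annotation content; a low Dice would point at the label_map/conversion_order logic.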