Closed: weiaicunzai closed this issue 10 months ago.
Hi, thanks for your attention.
This command and its parameters are used to preprocess cam16:
python create_patches_fp.py --source DATA_DIRECTORY --save_dir RESULTS_DIRECTORY --patch_size 512 --step_size 512 --preset bwh_biopsy.csv --seg --patch
This code comes from DTFD. I didn't use this part of the code; I only used CLAM's API to preprocess all the datasets.
This is the segmented image for normal_027:
I hope this will address your issue.
Thanks, I guess I forgot to add the bwh_biopsy.csv preset parameter in the command. After adding the bwh_biopsy.csv parameter, here is what I got:
I didn't even notice this parameter; I thought something was wrong with the CLAM API, so I wrote my own function to segment the tissue areas, which gives me something like this. I guess my code is not as good as the CLAM API. I have wasted too much time on this.
Another follow-up question: I noticed that you extract patches at 40x magnification with a patch size of 512x512 on the cam16 dataset. I'm assuming you resize the 512x512 patches to 256x256 before sending them to the network to extract patch features. Why not directly extract 256x256 patches at 20x magnification, as many other works do? Does extracting 512x512 patches at 40x magnification give a performance boost?
Thank you very much, you have really helped me a lot.
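For reference, a minimal sketch of the resizing described in the question above, assuming a torchvision-style pipeline with an ImageNet-pretrained ResNet-50 as the patch feature extractor; the file name, model, and normalization are illustrative, not the exact pipeline used in the paper:

import torch
import torchvision.transforms as T
from torchvision.models import resnet50
from PIL import Image

patch = Image.open("patch_40x_512.png").convert("RGB")   # illustrative: a 512x512 patch cut at 40x

transform = T.Compose([
    T.Resize((256, 256)),   # 512x512 at 40x -> 256x256, roughly the field of view of a 256x256 patch at 20x
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

extractor = resnet50(weights="IMAGENET1K_V1")
extractor.fc = torch.nn.Identity()        # keep the 2048-d pooled features as the patch embedding
extractor.eval()

with torch.no_grad():
    features = extractor(transform(patch).unsqueeze(0))   # shape: (1, 2048)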
I think this is a commonly agreed-upon compromise, which has been mentioned several times in the CLAM and TransMIL repository issues. It is mainly because level=1 isn't 20x on all slides, and level=0 and level=1 don't always differ by exactly 2x in width and height. For more details on this, check out the CLAM repository issues.
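For context, a minimal sketch (assuming OpenSlide; the slide path is illustrative) of how to inspect a slide's pyramid, which shows why level=1 cannot simply be assumed to be 20x or an exact 2x downsample of level=0:

import openslide

slide = openslide.OpenSlide("normal_027.tif")   # illustrative path

# Objective power recorded for level 0 (can be missing on some slides).
print(slide.properties.get(openslide.PROPERTY_NAME_OBJECTIVE_POWER))

# Downsample factor of each pyramid level relative to level 0; level 1 is not
# guaranteed to be exactly 2.0, so it may not correspond to 20x on a 40x slide.
print(slide.level_downsamples)

# Pixel dimensions of each level.
print(slide.level_dimensions)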
Yes, I have checked the CLAM repository issue and the HIPT issue. Some slides in the TCGA database only have 20x magnification. My point is: for slides whose level 0 is 40x, extracting 512x512 patches is fine, since we can resize them to 256x256 to obtain 20x magnification later. However, for slides with only 20x magnification, extracting 512x512 patches yields 4 times fewer patches than extracting 256x256 patches. Will the number of patches extracted per slide affect the overall performance? I wrote a simple function to check whether level 0 of a slide is 20x or 40x and then apply a different patch size to different slides (512 for 40x, 256 for 20x). Is this necessary?
And this is the code snippet I used to check the level-0 magnification of a slide. Is there something wrong with my code?
# Prefer the scanner-reported objective power when it is available.
if 'aperio.AppMag' in wsi.properties.keys():
    level_0_magnification = int(float(wsi.properties['aperio.AppMag']))
# Otherwise fall back to microns-per-pixel: ~0.25 mpp -> 40x, ~0.5 mpp -> 20x.
elif 'openslide.mpp-x' in wsi.properties.keys():
    level_0_magnification = 40 if int(float(wsi.properties['openslide.mpp-x']) * 10) == 2 else 20
# If neither property exists, guess from the level-0 resolution (empirical threshold).
else:
    if max(wsi.level_dimensions[0]) < 50000:
        level_0_magnification = 20
    else:
        level_0_magnification = 40
I think this might be a more effective way, and it is worth trying. But in my previous work and papers, I only used the simple strategy of extracting at level 0 and then downsampling. As for the correctness of your code, I cannot judge whether it is correct from a pathology standpoint, because I do not have professional pathology knowledge, for example whether the value of 50000 is appropriate.
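For illustration, a minimal sketch of that level-0-then-downsample strategy, assuming OpenSlide and Pillow; the slide path, coordinates, and interpolation mode are illustrative rather than the exact code used in the paper:

import openslide
from PIL import Image

slide = openslide.OpenSlide("normal_027.tif")   # illustrative path

x, y = 10240, 20480   # illustrative top-left corner, in level-0 coordinates
# Read a 512x512 tile at level 0 (40x on this slide) ...
patch = slide.read_region((x, y), 0, (512, 512)).convert("RGB")
# ... and downsample it to 256x256, which approximates a 256x256 patch at 20x.
patch_20x = patch.resize((256, 256), Image.BILINEAR)
patch_20x.save("patch_20x.png")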
Thanks. I noticed that in the TCGA-BRCA dataset, some slides are missing magnification information. So I looked up other slides that contain the magnification information and found that slides with 40x magnification often have a level-0 resolution larger than 50000. Therefore, 50000 is just an empirical value.
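A minimal sketch, assuming OpenSlide and an illustrative slide directory, of how one could compare the recorded AppMag against the level-0 resolution to sanity-check such an empirical threshold:

from pathlib import Path
import openslide

# Illustrative directory of TCGA-BRCA slides.
for path in sorted(Path("TCGA-BRCA/slides").glob("*.svs")):
    wsi = openslide.OpenSlide(str(path))
    app_mag = wsi.properties.get("aperio.AppMag", "missing")
    width, height = wsi.level_dimensions[0]
    print(f"{path.name}: AppMag={app_mag}, level-0 size={width}x{height}")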
Hi, thanks for your great work. Could you please tell me how you preprocessed the cam16 dataset?
In the paper, I noticed that you claim to use the CLAM API to extract patch features. However, the CLAM API does not perform well on the cam16 dataset.
For example, I have saved the segmentation mask from the CLAM API (default parameters), and for the normal_027 slide it gives zero tissue region:
So could you please tell me how you segment the tissue areas and extract patch-level features? Did you use this code?