invoker-LL / WSI-finetuning

This is the official repository for our CVPR 2023 paper 'Task-Specific Fine-Tuning via Variational Information Bottleneck for Weakly-Supervised Pathology Whole Slide Image Classification'.

Inquiry about Missing Slides when Patching #7

Closed bryanwong17 closed 11 months ago

bryanwong17 commented 11 months ago

Hi, I tried following your code to create patches by running `bash create_patches.sh`. However, two slides are missing (normal_027 and normal_045), so I could only get 397 instead of 399 slides on the Camelyon16 dataset. Do you perhaps know why?
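For reference, this is a minimal sketch of how I checked which slides were dropped (the directory names are placeholders for my local setup, not paths from this repo):

```python
import glob
import os

# Hypothetical locations: raw Camelyon16 slides and the CLAM-style patch output.
slide_dir = "CAMELYON16/training/normal"
patch_dir = "RESULTS_DIRECTORY/patches"

# Compare slide IDs on disk against the .h5 patch files that were produced.
slides = {os.path.splitext(os.path.basename(p))[0]
          for p in glob.glob(os.path.join(slide_dir, "*.tif"))}
patched = {os.path.splitext(os.path.basename(p))[0]
           for p in glob.glob(os.path.join(patch_dir, "*.h5"))}

print("missing after patching:", sorted(slides - patched))
# -> ['normal_027', 'normal_045']
```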

Thank you!

invoker-LL commented 11 months ago

These 2 slides contain very little tissue, so they may be filtered out by pre-processing operations (such as Otsu thresholding). However, Otsu thresholding is useful for reproducing the results of CLAM.
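Roughly, the tissue check works like the following sketch (simplified for illustration, not the exact CLAM segmentation code; the function name and thumbnail size are just examples). A slide whose Otsu mask covers almost no area yields no valid tissue regions and is skipped:

```python
import cv2
import numpy as np
import openslide

def tissue_fraction(slide_path, thumb_size=2048):
    """Rough estimate of the tissue fraction via Otsu thresholding on a thumbnail."""
    slide = openslide.OpenSlide(slide_path)
    thumb = np.array(slide.get_thumbnail((thumb_size, thumb_size)).convert("RGB"))
    # Threshold the saturation channel: tissue is more saturated than the white background.
    sat = cv2.cvtColor(thumb, cv2.COLOR_RGB2HSV)[:, :, 1]
    _, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask.mean() / 255.0

# Slides with a tiny tissue fraction (like normal_027 / normal_045) produce
# almost no foreground, so no patches are extracted for them.
```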

bryanwong17 commented 11 months ago

So do you have any suggestions for reproducing the result? How did you deal with this issue?

invoker-LL commented 11 months ago

You can simply omit these 2 slides during training, which gives roughly the same AUC as CLAM. Without Otsu in the pre-processing, I can only reach an AUC of 0.82-0.83. (I am not sure about the details of their experiments either; you could also try the pre-processing used in DTFD.)

These results are mainly limited by the weak embeddings from ImageNet pretraining. If you use pretrained embeddings such as Lunit's, this may not be an issue. I think our fine-tuning would still bring some improvement on top of Lunit embeddings, but Camelyon-16 quickly reaches its upper bound (around an AUC of 0.96-0.97), so the improvement may be slight.
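For example, dropping the two slides from a CLAM-style split file could look like this (a rough sketch; the CSV path and the `slide_id` column name are placeholders for whatever split format you use):

```python
import pandas as pd

# Slides to exclude because patching produced no tissue patches for them.
EXCLUDE = {"normal_027", "normal_045"}

df = pd.read_csv("splits/camelyon16/train.csv")          # hypothetical split file
df = df[~df["slide_id"].isin(EXCLUDE)]
df.to_csv("splits/camelyon16/train_filtered.csv", index=False)
```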

bryanwong17 commented 11 months ago

Got it. Thank you for your help!