miccaiif / WENO

Official PyTorch implementation of our NeurIPS 2022 paper: Weakly Supervised Knowledge Distillation for Whole Slide Image Classification

ResNet encoder in WENO #3

Open · cswpy opened this issue 1 year ago

cswpy commented 1 year ago

First of all, thank you for your work. I was checking the code after reading the paper and have a few questions about the implementation.

  1. The paper states that ResNet18 is used for all datasets except CIFAR-10-MIL. However, it seems that the CAMELYON model uses camelyon_feat_projecter from AlexNet as the encoder, which is a linear layer with BatchNorm. I understand that the features were already extracted by SimCLR. Does this mean that the encoder in this case is just the feature_projecter? (A sketch of my reading follows these first two questions.) Additionally, dataset_CAMELYON16_BasedOnFeat seems to filter patches, i.e., it only keeps positive patches in positive bags and negative patches in negative bags. Is there a reason behind that?
  2. I am a bit confused about the part of dataset_CAMELYON16_BasedOnFeat.py that loads the pre-trained features, specifically lines 74-77, linked below. https://github.com/miccaiif/WENO/blob/baf0d8fe97061d9844ddb628bcf8f8e994685ff1/Datasets_loader/dataset_CAMELYON16_BasedOnFeat.py#L74-L77

Also, I see that the pre-trained features are read on line 90. Could you clarify the file structure of those files (file formats, naming, directory structure, etc.)?
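To make my first question concrete, here is roughly what I take camelyon_feat_projecter to be: a linear layer plus BatchNorm applied to the pre-extracted SimCLR features, rather than a full convolutional encoder. This is just my reading of the code, and the dimensions here are my assumption:

```python
import torch
import torch.nn as nn

# Sketch of my understanding of camelyon_feat_projecter:
# a linear projection plus BatchNorm over pre-extracted SimCLR
# features, not a full conv encoder. Dimensions are assumed.
class FeatProjecter(nn.Module):
    def __init__(self, in_dim=512, out_dim=512):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.bn = nn.BatchNorm1d(out_dim)

    def forward(self, x):  # x: (num_patches, in_dim) feature matrix
        return self.bn(self.fc(x))

feats = FeatProjecter()(torch.randn(8, 512))  # -> (8, 512)
```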

  3. ResNetv2 is implemented in the codebase but not used in the paper; is there a reason behind this?
  4. The DSMIL+WENO implementation includes SimCLR. If I understand correctly, the WENO framework shares a pre-trained encoder between the two branches and continues to update its weights during training. I am considering using a pre-trained ResNet50 inside WENO; do you think this is a good idea? (A sketch of what I have in mind follows this list.)
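For the last question, this is the kind of drop-in I have in mind: torchvision's ImageNet-pretrained ResNet50 as the shared encoder for both branches, with a projection down to the feature width the WENO heads expect. This is only a sketch of my plan, not the repo's code; the 512-d output is an assumption borrowed from ResNet18.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch: ImageNet-pretrained ResNet50 as the shared WENO encoder.
# The 512-d projection is an assumption (matching ResNet18's width).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()  # expose the 2048-d pooled features

class SharedEncoder(nn.Module):
    def __init__(self, backbone, out_dim=512):
        super().__init__()
        self.backbone = backbone
        self.proj = nn.Linear(2048, out_dim)

    def forward(self, x):  # x: (num_patches, 3, H, W)
        return self.proj(self.backbone(x))

encoder = SharedEncoder(backbone)  # shared by teacher and student branches
feats = encoder(torch.randn(4, 3, 224, 224))  # -> (4, 512)
```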

I know this is a lot of questions, but it would be great if you could answer them. Again, thank you for your contributions!