histocartography / zoommil

ZoomMIL is a multiple instance learning (MIL) method that learns to perform multi-level zooming for efficient Whole-Slide Image (WSI) classification.

Hello, Kevin, I encountered some issues while using preprocess.py. #12

Open wangxinghangcnn opened 7 months ago

wangxinghangcnn commented 7 months ago

When I tried to open the .h5 file, all the values in the matrix turned out to be zeros. Have you ever encountered a similar problem? (screenshot attached)
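(For reference, a minimal way to inspect what a saved .h5 file actually contains; the file path and dataset keys below are placeholders, not names taken from preprocess.py:)

import h5py
import numpy as np

# Placeholder path; substitute the file written by preprocess.py.
with h5py.File("output.h5", "r") as f:
    for key in f.keys():
        arr = np.asarray(f[key])
        print(key, arr.shape, arr.dtype, "all zero:", not np.any(arr))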

wangxinghangcnn commented 7 months ago

Hello, after my investigation, I found that the data is not zero before entering ResNet50, but becomes all zeros after the first convolutional layer. What could be the reason for this? Any feedback would be greatly appreciated. (screenshots attached)
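(One way to narrow down where the zeros appear is a forward hook on the first convolution. This is a generic sketch with a plain torchvision ResNet50 standing in for the repository's feature extractor, and a random tensor standing in for a real patch:)

import torch
import torchvision

# Stand-in model: attach the hook to your own loaded feature extractor instead.
# (Newer torchvision versions replace pretrained=True with a weights= argument.)
model = torchvision.models.resnet50(pretrained=True).eval()

activations = {}

def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.conv1.register_forward_hook(save_output("conv1"))

# Dummy patch; substitute a real, normalized patch from your pipeline.
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    model(x)

out = activations["conv1"]
print("conv1 output:", tuple(out.shape),
      "non-zero fraction:", (out != 0).float().mean().item())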

wangxinghangcnn commented 7 months ago

Hello, I have now ruled out the issue with model parameter loading. I have ensured that all model parameters are loaded correctly.

kevthan commented 7 months ago

Hi, thank you for your interest in our work. I have not experienced this issue so far, but it could happen when all patches in your whole-slide image are considered background. See here:

https://github.com/histocartography/zoommil/blob/main/zoommil/utils/preprocessing.py#L244

A patch is considered background if its fraction of tissue area (after masking) is lower than tissue_thresh. You may be able to fix your problem by adapting your tissue mask parameters (see here).
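(In other words, the filter amounts to a per-patch tissue-fraction test. A minimal sketch of that idea, as an illustration rather than the actual code behind the link above:)

import numpy as np

def is_background(mask_patch: np.ndarray, tissue_thresh: float = 0.2) -> bool:
    # mask_patch: binary tissue mask for one patch (non-zero = tissue).
    tissue_fraction = np.count_nonzero(mask_patch) / mask_patch.size
    return tissue_fraction < tissue_thresh

(If every patch is rejected by such a test, the resulting bag would be empty or zero-filled, which would match the symptom reported above.)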

wangxinghangcnn commented 7 months ago

Thank you very much for taking the time to reply. I have looked into the two points you raised, but the problem is not resolved yet. I tried values from 0 to 1 for tissue_thresh and used one of the CRC slides for debugging; your default value of 0.2 also did not help. Can you give me some more suggestions? This is very important to me, thank you very much.

wangxinghangcnn commented 7 months ago

I have now identified the key to the problem: the input_image is all [255, 255, 255], which means the input image is only background rather than pathological tissue. How do I determine the right parameters so that the correct image is used as input?
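(If the patches reaching the model are pure white, the tissue mask most likely does not overlap the tissue at all. A quick, hedged way to eyeball the mask at low magnification, using a generic Otsu threshold on the saturation channel rather than zoommil's own preprocessing; the slide path is a placeholder:)

import numpy as np
import cv2
import openslide

slide = openslide.OpenSlide("slide.svs")  # placeholder path
level = slide.level_count - 1             # lowest-resolution level for a quick overview
thumb = np.array(slide.read_region((0, 0), level, slide.level_dimensions[level]).convert("RGB"))

# Otsu threshold on the saturation channel: a common, simple tissue mask.
hsv = cv2.cvtColor(thumb, cv2.COLOR_RGB2HSV)
sat = hsv[:, :, 1].copy()
_, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print("tissue fraction in thumbnail:", (mask > 0).mean())
cv2.imwrite("mask_preview.png", mask)  # inspect visually: tissue should appear white

(If the tissue fraction here is near zero or the preview looks empty, the mask parameters need adjusting before any patch extraction will produce non-background input.)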

wangxinghangcnn commented 7 months ago

It's strange: when I used an image from the CAMELYON16 dataset to debug the code, I found that the embeddings returned in histocartography\preprocessing\feature_extraction.py (lines 466-479),

def __call__(self, patch: torch.Tensor) -> torch.Tensor:
    """Computes the embedding of a normalized image input.

    Args:
        image (torch.Tensor): Normalized image input.

    Returns:
        torch.Tensor: Embedding of image.
    """
    patch = patch.to(self.device)
    with torch.no_grad():
        embeddings = self.model(patch).squeeze()
    return embeddings

were also zero matrices, which made me very confused. My parameters are exactly the original parameters you provided.