Zhengyushan / kat

The code for Kernel attention transformer (KAT)

About the extraction of patches using 'cnn_sample' #4

Open · joe19981 opened this issue 1 year ago

joe19981 commented 1 year ago

When I run 'cnn_sample', the majority of the 500 patches extracted from each Whole Slide Image (WSI) are blank. Is there an issue with this?

Zhengyushan commented 1 year ago

The positions of the patches for extraction are calculated from the foreground mask of the slide. The magnification (or level) of the mask needs to match the hyper-parameter 'MASK_LEVEL' in the config file. You may try tuning 'MASK_LEVEL' to the exact level of your foreground mask.
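To illustrate the point about matching `MASK_LEVEL` to the mask's pyramid level, here is a minimal sketch (not the repository's code) of how a foreground mask at a given level can be used to keep only patch positions that land on tissue. The slide path, `PATCH_SIZE`, and `TISSUE_THRESHOLD` are hypothetical; it assumes the `openslide`, `numpy`, and `opencv-python` packages.

```python
# Sketch: build a foreground mask at MASK_LEVEL and filter patch positions.
# Assumed values below (slide path, patch size, threshold) are examples only.
import cv2
import numpy as np
import openslide

MASK_LEVEL = 3          # must match the actual pyramid level of your mask
PATCH_SIZE = 256        # patch size in level-0 pixels
TISSUE_THRESHOLD = 0.5  # minimum tissue fraction to keep a position

slide = openslide.OpenSlide("example.svs")  # hypothetical slide path

# Render the slide at MASK_LEVEL and threshold the saturation channel (Otsu)
# to separate tissue from the mostly-white background.
w, h = slide.level_dimensions[MASK_LEVEL]
thumb = np.array(slide.read_region((0, 0), MASK_LEVEL, (w, h)).convert("RGB"))
sat = cv2.cvtColor(thumb, cv2.COLOR_RGB2HSV)[:, :, 1]
_, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Patch size in mask coordinates: level-0 pixels divided by the downsample
# factor of MASK_LEVEL. If MASK_LEVEL does not match the mask's real level,
# this window will be wrong and many sampled patches will be blank.
scale = slide.level_downsamples[MASK_LEVEL]
mask_patch = max(1, int(PATCH_SIZE / scale))

def is_tissue(x0, y0):
    """x0, y0 are level-0 coordinates of a candidate patch's top-left corner."""
    mx, my = int(x0 / scale), int(y0 / scale)
    window = mask[my:my + mask_patch, mx:mx + mask_patch]
    return window.size > 0 and (window > 0).mean() >= TISSUE_THRESHOLD
```

If most sampled positions fail this kind of tissue check, that usually indicates the mask level and `MASK_LEVEL` disagree, which matches the blank-patch symptom described above.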