Hello,
I encountered an issue when running create_patches.py.
Part of the error is the following:
progress: 0.50, 1/2
processing Slide12.svs
**level_dim 82033 x 99067 is likely too large for successful segmentation, aborting**
average segmentation time in s per slide: 0.0
average patching time in s per slide: 0.0
average stiching time in s per slide: 0.0
initializing dataset
loading model checkpoint
I tried adjusting the seg_level parameter in seg_params in the .csv file where I've specified all the parameters, but nothing seemed to work. Do you have any advice on how to fix this?
Is this error due to the resolution being too high for segmentation, so that we would need to downsample the image before segmenting it?
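For reference, this is roughly the kind of fix I have in mind: instead of segmenting at level 0, pick the highest-resolution pyramid level whose dimensions stay under a pixel cap. The helper name and the 1e8 cap are my own assumptions (the cap just seems consistent with the dimensions in the aborting message above), not something I found documented:

```python
def best_seg_level(level_dimensions, max_pixels=1e8):
    """Return the index of the highest-resolution pyramid level whose
    width * height stays below max_pixels; fall back to the
    lowest-resolution level if none qualifies.

    `level_dimensions` mirrors OpenSlide's slide.level_dimensions:
    a sequence of (width, height) tuples, level 0 first.
    """
    for idx, (w, h) in enumerate(level_dimensions):
        if w * h < max_pixels:
            return idx
    return len(level_dimensions) - 1


# Example: a slide like Slide12.svs with level 0 at 82033 x 99067,
# with hypothetical 4x downsamples for the remaining levels.
dims = [(82033, 99067), (20508, 24766), (5127, 6191), (1281, 1547)]
print(best_seg_level(dims))  # level 2 is the first to fit under 1e8 pixels
```

With a real slide, the same index could presumably be passed as seg_level so segmentation runs on the downsampled level rather than the full-resolution image.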
Thank you, Lisa