Closed: snigdhaAgarwal closed this issue 3 years ago
Hi,
So the mask generated during inference is the tissue mask; you shouldn't try to modify it to get the binary cell output. As you already suggested, you can use the JSON file output to binarise the image if you wish; everything you need is there. Note, however, that saving the image at 40x will take up a lot of memory and there may be issues, which is why we saved the output as JSON. Alternatively, you can process image tiles, where binarisation is a bit more straightforward. Let me know if you need more assistance.
Thanks for the prompt reply! I used the tutorial here: https://github.com/vqdang/hover_net/blob/master/examples/usage.ipynb to get the binary file. The code I used is below for anyone who has a similar use case.
from misc.viz_utils import visualize_instances_dict
import numpy as np
import json
import cv2
import zarr

with open('/mnt/ibm_sm/home/snigdha/TSP14 UB B2.json') as f:
    data = json.load(f)

type_info = {"0": ["nolabe", [255, 255, 255]]}  # all white
tile_info_dict = {}
for i in data['nuc']:
    # contour is a list of [x, y] points for one nucleus
    coords = np.array(data['nuc'][i]['contour'])
    tile_info_dict[i] = {'contour': coords.astype(np.int64), 'type': '0'}

# shape of the max magnification level in the svs
r_mask = np.zeros((67343, 143424), np.uint8)
overlaid_output = visualize_instances_dict(
    r_mask, tile_info_dict, type_colour=type_info, line_thickness=cv2.FILLED)
zarr.save('example.zarr', overlaid_output)
Saving as a png was taking a lot of time and the resulting file was huge, so I saved it as a zarr instead, which really saved me space and time!
I need the binary version of a large svs-format cell image. I am using run_infer.py because it generates a mask. However, the mask output is a binary image at the most zoomed-out level, whereas I need a binary at the most zoomed-in level. The other option I'm exploring is to use the output JSON and its contour information to build a binary image. Is there another way to get a binary directly? If not, could this be added as a feature in the future?
I also tried modifying the infer/wsi.py code to produce a mask for each magnification level, as seen below:
Though the binaries for the three lower magnification levels look good, the one for magnification 40 is completely black (after removing the remove_small_holes and remove_small_objects function calls) or completely white. Basically, cells are not being identified. Can you point out what I'm doing wrong here, or, as the comment says, is this piece of code meant for generating tissue-level binaries only?