vqdang / hover_net

Simultaneous Nuclear Instance Segmentation and Classification in H&E Histology Images.
MIT License
537 stars 224 forks

Type mapping #59

Closed zerodohero closed 4 years ago

zerodohero commented 4 years ago

Hello. The first problem is that in the type map, cells of the same type are connected to each other, as shown in the figure below.

zerodohero commented 4 years ago

Second question: the figure below shows two type maps, the prediction on top and the actual (ground-truth) map underneath. You can see that cells that should be of the same type are labeled with different colors in the two images.

zerodohero commented 4 years ago

(screenshot attached)

simongraham commented 4 years ago

First of all, the cells should not be connected, which suggests that something is not right here. Did you run process.py after running infer.py to get the instances? It appears that you have a different coloured overlay to what we used in the code too. Did you make some modifications? Different colours denote different nuclei categories. It is normal for there to be different categories in the circled areas. For example, the regions in black may contain fibroblasts and inflammatory cells. I'm not sure if I completely answered your question, but please be absolutely clear what you would like us to help you with.

zerodohero commented 4 years ago

I ran process.py after running infer.py. First of all, what your code generates is an instance segmentation map, not a cell type map. The problem behind the first image is that in my type mask, all cells of the same type are represented by the same number, so neighboring cells of the same type merge into one piece. Since the program uses the numbers to distinguish cell boundaries, it naturally cannot separate adjacent cells.

zerodohero commented 4 years ago

I have solved the second problem by restricting each class to a fixed color. I used a dictionary: color = {0: [255, 0, 0], 1: [0, 255, 255], 2: [0, 119, 102], 3: [0, 255, 0], 4: [0, 0, 255]}
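A minimal sketch of that fix (illustrative names only, assuming pred_type is a per-pixel map of class IDs; this is not my exact code):

    import numpy as np

    color = {0: [255, 0, 0], 1: [0, 255, 255], 2: [0, 119, 102],
             3: [0, 255, 0], 4: [0, 0, 255]}

    def colorize_type_map(pred_type):
        """Turn an HxW array of class IDs into an HxWx3 image, one fixed colour per class."""
        out = np.zeros(pred_type.shape + (3,), np.uint8)
        for class_id, rgb in color.items():
            out[pred_type == class_id] = rgb
        return out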

zerodohero commented 4 years ago

For the first question, it is worth noting that your code does not generate a cell type map (as in the image below). The instance-map part of the original code, which I imitated, is only used for segmentation:

    ############### instance map
    pred_inst = remap_label(pred_inst, by_size=True)
    overlaid_output = visualize_instances(pred_inst, img)
    overlaid_output = cv2.cvtColor(overlaid_output, cv2.COLOR_BGR2RGB)
    cv2.imwrite('%s/%s.png' % (proc_dir, basename), overlaid_output)

Below is the code I added:

    np.savetxt(proc_dir_type + basename + '.csv', pred_type, delimiter=',')
    overlaid_output = visualize_instances_type(pred_type, proc_dir_type, basename, dicts, img)
    overlaid_output = cv2.cvtColor(overlaid_output, cv2.COLOR_BGR2RGB)
    cv2.imwrite('%s/%s.png' % (proc_dir_type, basename), overlaid_output)

The problem lies here: the code separates cells based on their numbers, so if all cells of the same type share the same number, the resulting mask is connected into one piece:

    inst_map = np.array(mask == inst_id, np.uint8)
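A minimal toy example (made-up data, not the repo's code) showing why this breaks contour extraction when two separate cells share one number:

    import numpy as np
    import cv2

    mask = np.zeros((10, 10), np.int32)
    mask[1:4, 1:4] = 7   # first nucleus, labelled 7
    mask[6:9, 6:9] = 7   # second, disjoint nucleus, also labelled 7

    inst_map = np.array(mask == 7, np.uint8)        # selects BOTH nuclei at once
    res = cv2.findContours(inst_map, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    contours = res[0] if len(res) == 2 else res[1]  # OpenCV 3.x vs 4.x return format
    print(len(contours))  # 2, not the single contour the per-instance loop expects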

zerodohero commented 4 years ago

I have been studying this for a long time, but there is still no solution.

I also found that the result of

    contours = cv2.findContours(inst_map_crop, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

looks a bit off.

contours[1] outputs the coordinates of the objects, but sometimes the number of coordinate sets it outputs differs from the number of contours.

The output below is a single contour, but print("count", np.size(contours[1])) gives 226.

zerodohero commented 4 years ago

These are the issues I have found so far; I look forward to your answers.

vqdang commented 4 years ago

Please check #58 first; the snippet above has the same problem and it breaks other results.

visualize_instances is written to simply loop over each detected nucleus and plot each contour individually. At https://github.com/vqdang/hover_net/blob/8582def4a407b89e3eca475607a67493f28ee589/src/misc/viz_utils.py#L53, by design of the code, there should be one and only one contour here (i.e. one nucleus). Because of #58 you have many blobs sharing the same ID, hence you broke the code. You can see this right in your figure above, where the inst_map is displayed as many blobs rather than a single blob.
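To illustrate, a minimal sketch of that per-instance loop (not the exact repo code, just the design described above):

    import cv2
    import numpy as np

    def overlay_instances(inst_map, img, colors):
        """inst_map: HxW integer labels (0 = background); colors: one BGR tuple per instance."""
        overlay = img.copy()
        inst_ids = [i for i in np.unique(inst_map) if i != 0]
        for idx, inst_id in enumerate(inst_ids):
            inst_mask = np.array(inst_map == inst_id, np.uint8)  # exactly one blob per unique ID
            res = cv2.findContours(inst_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
            contours = res[0] if len(res) == 2 else res[1]       # OpenCV 3.x vs 4.x return format
            cv2.drawContours(overlay, contours, -1, colors[idx], 2)
        return overlay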

zerodohero commented 4 years ago

Yes, what you are describing is your segmentation code, where each cell has a unique number. But for the cell type map, the prediction assigns one number per cell type, so the first problem I mentioned appears: cells of the same type stick together and the outline of each individual cell cannot be drawn.

vqdang commented 4 years ago

Carrying on from your latest response on #58: please check the reply above. Whatever you modified is still wrong, because the inst_map should not look like that. Also (screenshot attached): you are writing your own function, so you must detail the code and the input if you want us to assist.

zerodohero commented 4 years ago

Some of this code was added because I wanted to output some results to validate my input:

    np.savetxt(proc_dir_type + basename + '.csv', pred_type, delimiter=',')
    overlaid_output = visualize_instances_type(pred_type, proc_dir_type, basename, dicts, img)
    overlaid_output = cv2.cvtColor(overlaid_output, cv2.COLOR_BGR2RGB)
    cv2.imwrite('%s/%s.png' % (proc_dir_type, basename), overlaid_output)

It is equivalent to the following:

    overlaid_output = visualize_type_type(pred_type, img, color=[[255, 0, 0], [0, 255, 255], [0, 119, 102], [0, 255, 0], [0, 0, 255]])
    overlaid_output = cv2.cvtColor(overlaid_output, cv2.COLOR_BGR2RGB)
    cv2.imwrite('%s/%s.png' % (proc_dir_type, basename), overlaid_output)

visualize_type_type() is the same as visualize_instances() in hover_net/src/misc/viz_utils.py.

I haven't changed anything else in hover_net/src/misc/viz_utils.py except adding output of some intermediate results for verification.

zerodohero commented 4 years ago

So my question is very simple: how do you draw the type map while making sure the cells remain individually separated?

zerodohero commented 4 years ago

Just like the right-hand image.

vqdang commented 4 years ago

Something like the following should do; change the colors as you see fit:

type_color_dict = {
  1 : (255, 0, 0),
  2 : (0, 255, 0),
  3 : (0, 0, 255),
  4 : (255, 0, 255)
}
pred_data = sio.loadmat('sample.mat')
pred_type_list = np.squeeze(pred_data['inst_type'])
pred_inst_map = pred_data['inst_map']
nuc_color = [type_color_dict[v] for v in pred_type_list]
overlaid = visualize_instances(pred_inst_map, img, nuc_color)

zerodohero commented 4 years ago
import glob
import os

import cv2
import numpy as np
import scipy.io as sio
from scipy.ndimage import filters, measurements
from scipy.ndimage.morphology import (binary_dilation, binary_fill_holes,
                                      distance_transform_cdt,
                                      distance_transform_edt)
from skimage.morphology import remove_small_objects, watershed

import postproc.hover
import postproc.dist
import postproc.other

from config import Config

from collections import Counter
from misc.viz_utils import visualize_instances
from misc.utils import get_inst_centroid
from metrics.stats_utils import remap_label

###################

# TODO:
# * due to the need of running this multiple times, should make
# * it less reliant on the training config file

## ! WARNING:
## check the prediction channels, wrong ordering will break the code !
## the prediction channels ordering should match the ones produced in augs.py

cfg = Config()

# * flag for HoVer-Net only
# 1 - threshold, 2 - sobel based
energy_mode = 2
marker_mode = 2

pred_dir = cfg.inf_output_dir
proc_dir = pred_dir + '_proc/'
print(proc_dir)
proc_dir_type=pred_dir + '_proc2'
proc_dir2 = pred_dir + '_proc/'+"type"
file_list1 = glob.glob('%s/*.mat' % (proc_dir))
file_list1.sort() # ensure same order
def process():
    for filename in file_list1:
        filename = os.path.basename(filename)
        basename = filename.split('.')[0]
        print(proc_dir, basename, end=' ', flush=True)

        ##
        img = cv2.imread(cfg.inf_data_dir + basename + cfg.inf_imgs_ext)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        print("s", np.array(img).shape)  ##1000*1000*3
        #print(ss)

        pred_data = sio.loadmat('%s/%s.mat' % (proc_dir, basename))
        #print(pred_data)
        type_color_dict = {
            0: (255, 255, 255),  # 0 must be included because pred_data['inst_type'] contains a 0 entry: Counter({4: 558, 3: 218, 2: 90, 0: 1})
            1: (255, 0, 0),
            2: (0, 255, 0),
            3: (0, 0, 255),
            4: (255, 0, 255)
        }
        pred_type_list = np.squeeze(pred_data['inst_type'])

        pred_inst_map = pred_data['inst_map']
        #print(pred_type_list)
        print(Counter(pred_type_list))####Counter({4: 558, 3: 218, 2: 90, 0: 1})
        nuc_color = [type_color_dict[v] for v in pred_type_list]
        overlaid = visualize_instances(pred_inst_map, img, nuc_color)
        overlaid_output = cv2.cvtColor(overlaid, cv2.COLOR_BGR2RGB)
        cv2.imwrite('%s/%s.png' % (proc_dir2, basename), overlaid_output)
if __name__=="__main__":
    process()

visualize_instances: there are no changes to visualize_instances, but it does not work.

    Traceback (most recent call last):
      File "D:/Users/lzxie/HRclass/hover_net-master/hover_net-master/src/type.py", line 86, in <module>
        process()
      File "D:/Users/lzxie/HRclass/hover_net-master/hover_net-master/src/type.py", line 82, in process
        overlaid = visualize_instances(pred_inst_map, img, nuc_color)
      File "D:\Users\lzxie\HRclass\hover_net-master\hover_net-master\src\misc\viz_utils.py", line 56, in visualize_instances
        contours = cv2.findContours(inst_map_crop, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    TypeError: Layout of the output array image is incompatible with cv::Mat (step[ndims-1] != elemsize or step[1] != elemsize*nchannels)
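For reference, one common cause of this particular OpenCV error is handing cv2.findContours an array that is not C-contiguous or not uint8; arrays loaded with scipy.io.loadmat can come back Fortran-ordered. A minimal sketch of the usual workaround (illustrative only; 'sample.mat' and the instance ID are placeholders, and I am not certain this is what happens here):

    import numpy as np
    import scipy.io as sio

    pred_data = sio.loadmat('sample.mat')                   # .mat arrays may be Fortran-ordered
    inst_map = np.ascontiguousarray(pred_data['inst_map'])  # force a C-contiguous copy
    inst_mask = np.ascontiguousarray(inst_map == 1, dtype=np.uint8)  # per-instance mask as uint8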

zerodohero commented 4 years ago

No errors were reported when process.py was run

haiqinzhong commented 4 years ago

@zerodohero Hi, I ran process.py and it reports an error:

    Traceback (most recent call last):
      File "process.py", line 97, in <module>
        overlaid_output = visualize_instances(pred_inst, img)
      File "/home/hqzhong/Programs/Hover-net/src/misc/viz_utils.py", line 58, in visualize_instances
        cv2.drawContours(inst_canvas_crop, contours[1], -1, inst_color, 2)
    cv2.error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/drawing.cpp:2509: error: (-215:Assertion failed) npoints > 0 in function 'drawContours'

Haven't you met this? I changed nothing except some paths. Do you know why?

zerodohero commented 4 years ago

Are you sure you're on the right track?

zerodohero commented 4 years ago

I remember that the author mentioned in another issue that the OpenCV version should be 3.2.x, and mine is 3.2.0.6. @haiqinzhong

haiqinzhong commented 4 years ago

But the OpenCV version in requirements.txt is now 4.1.2.30; look, that is the latest requirements file.

vqdang commented 4 years ago

@haiqinzhong Check my reply on #61 for this; I will amend the version in requirements.txt. It should be 3.2.0.6. In the grand scheme of things, 4.x.x.x does not affect anything else aside from breaking the findContours output format, so if you fix that, you can stay on 4.x.x.x.
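If you do stay on 4.x.x.x, a minimal compatibility sketch (not part of the repo) for the changed findContours return format: 3.x returns (image, contours, hierarchy) while 4.x returns (contours, hierarchy).

    import cv2
    import numpy as np

    def find_contours_compat(binary_map):
        """Return the contour list regardless of the installed OpenCV version."""
        result = cv2.findContours(binary_map.astype(np.uint8),
                                  cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
        return result[0] if len(result) == 2 else result[1]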

haiqinzhong commented 4 years ago

@vqdang Yes, I have had a look, and I'll try again. Thank you for your reply.

vqdang commented 4 years ago

@zerodohero It seems you may have done something that changed the output protocol of process.py. Below is the full version, using the GT as the sample .mat:

import cv2
from viz_utils import visualize_instances
import numpy as np
import matplotlib.pyplot as plt
import scipy.io as sio

img = cv2.imread('CoNSeP/Test/Images/test_1.png')

type_color_dict = {
  0 : (0, 0, 0),
  1 : (255, 0, 0),
  2 : (0, 255, 0),
  3 : (0, 0, 255),
  4 : (255, 0, 255),
  5 : (125, 0, 255),
  6 : (125, 0, 125)
}
pred_data = sio.loadmat('CoNSeP/Test/Labels/test_1.mat')
pred_type_list = np.squeeze(pred_data['inst_type'])
pred_inst_map = pred_data['inst_map']
nuc_color = [type_color_dict[int(v)] for v in pred_type_list]
overlaid = visualize_instances(pred_inst_map, img, nuc_color)

plt.imshow(overlaid)
plt.show()

The format of pred_data['inst_map'] should be the same between the GT .mat and the prediction .mat, and you would see something like this from the above code.

zerodohero commented 4 years ago

I solved this problem by adding

    pred_inst_map = remap_label(pred_inst_map, by_size=True)

to the original code, as follows. But why?

    pred_inst_map = pred_data['inst_map']
    #print(pred_type_list)

    pred_inst_map = remap_label(pred_inst_map, by_size=True)

    print(Counter(pred_type_list))  #### Counter({4: 558, 3: 218, 2: 90, 0: 1})
    nuc_color = [type_color_dict[v] for v in pred_type_list]
    overlaid = visualize_instances(pred_inst_map, img, nuc_color)

vqdang commented 4 years ago

I have no obvious clue, the source code doesn't depend on the ordering as far as I know. Anyways, I will close this issue as your plotting request has been solved.

Dogdog00 commented 3 years ago

> hello. The first problem is that in the type mapping, the same type of cells will be connected, as shown in the figure below

Hello, I have looked at the code you wrote. I would like to ask: I am training on my own dataset, and I want to change the final overlay to outline the cells with different colors. How can I implement this in the code? Thank you!