vqdang / hover_net

Simultaneous Nuclear Instance Segmentation and Classification in H&E Histology Images.
MIT License
504 stars 218 forks

Inference on trained model #209

Open LeoMira-1999 opened 2 years ago

LeoMira-1999 commented 2 years ago

Hi,

I've trained my model; however, I'm unable to run inference on it.

The original data I have is laid out as such:

```
train/
├── images/
│   ├── img0.tif
│   ├── img1.tif
│   └── ...
└── masks/
    ├── img0_mask0.tif
    ├── img0_mask1.tif
    ├── img1_mask0.tif
    ├── img1_mask1.tif
    ├── img1_mask2.tif
    └── ...
```

This layout is used for both the train and test (valid) folders. The goal is segmentation only, with no classification.

When I look at the data generated by run_train.py, I see it created two folders in logs:

```
logs/
├── 00/
│   ├── events.out.tfevents
│   ├── net_epoch=1.tar
│   ├── ...
│   └── stats.json
└── 01/
    ├── events.out.tfevents
    ├── net_epoch=1.tar
    ├── ...
    └── stats.json
```

When I want to run run_infer.py, I don't know what to select as the input dir or model path, even after looking at the usage notebook. I don't understand where to start for WSI or tile inference, or how to get it running.

If someone can help me on that one it would be amazing,

Thank you

Best,

Leonardo

simongraham commented 2 years ago

Hi,

We have provided two scripts (run_tile.sh and run_wsi.sh) that guide you through performing inference. We have also provided detailed instructions on the README page. Please go through those and let us know specifically what you are struggling with.
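As a rough illustration of what those scripts wrap, here is a hedged sketch that assembles a `run_infer.py` tile-mode command line. The flag names follow the README's examples, but the exact values (`--nr_types=0` for segmentation-only, `--model_mode=original`) are assumptions you should check against your own setup:

```python
# Sketch only: builds the kind of command run_tile.sh issues.
# Flag names follow the hover_net README; verify them against your checkout.
import shlex


def build_tile_cmd(model_path, input_dir, output_dir, gpu="0"):
    """Return a run_infer.py tile-mode command as an argument list.

    Assumptions: --nr_types=0 disables classification (segmentation only),
    and --model_mode matches how the model was trained.
    """
    cmd = (
        f"python run_infer.py --gpu={gpu} --nr_types=0 "
        f"--model_path={model_path} --model_mode=original "
        f"tile --input_dir={input_dir} --output_dir={output_dir}"
    )
    return shlex.split(cmd)
```

You could then pass the resulting list to `subprocess.run`, or simply paste the equivalent command into a shell script like run_tile.sh.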

VolodymyrChapman commented 1 year ago

Hi @LeoMira-1999 - the trained models are the .tar files, named with the epoch at which they were saved. The stats.json file records the evaluation metrics at each epoch, so you can use it to decide which of the .tar checkpoints to keep for inference.

As @simongraham pointed out above, you can use the run_tile.sh / run_wsi.sh files as scaffolds for your own inference script - the README goes into more detail about what each of the inference arguments does. For inference, select one model (.tar file) and use its path for the --model_path argument. The input_dir is the location of the images you want to run inference on.

The wsi / tile methods are a matter of choice and resources: if you have whole-slide images (plus good hardware - in my experience >48 GB RAM and a good GPU), you can use the wsi option to run inference on the tissue in a whole slide image. If you want to infer on smaller .tif patches, use the tile option. Does this make sense?
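To make the checkpoint-selection step concrete, here is a minimal sketch that picks the best epoch from stats.json. The metric key (`"valid-dice"`) and the file layout (a JSON object mapping epoch numbers to metric dicts) are assumptions - open your own stats.json to see which keys it actually contains:

```python
# Hedged sketch: choose which net_epoch=<N>.tar to keep, based on stats.json.
# Assumed layout: {"1": {"valid-dice": 0.70, ...}, "2": {"valid-dice": 0.81, ...}, ...}
# The metric name "valid-dice" is a placeholder; use a key present in your file.
import json


def best_epoch(stats_path, metric="valid-dice"):
    """Return the epoch (as a string key) with the highest value of `metric`."""
    with open(stats_path) as f:
        stats = json.load(f)
    # Epochs missing the metric sort last rather than raising.
    return max(stats, key=lambda ep: stats[ep].get(metric, float("-inf")))
```

If `best_epoch("logs/00/stats.json")` returns, say, `"50"`, you would pass `logs/00/net_epoch=50.tar` as `--model_path` to run_infer.py.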