smistad / FAST

A framework for high-performance medical image processing, neural network inference and visualization
https://fast.eriksmistad.no
BSD 2-Clause "Simplified" License
433 stars 101 forks

python example-Stitching patches run failed #179

Closed Talentsome-6 closed 1 year ago

Talentsome-6 commented 1 year ago

Describe the bug: I'm trying the Python whole-slide processing example "stitching patches", but it fails to run. The line generator = fast.PatchGenerator(256, 256, level=0, overlap=0.1) raises AttributeError("No constructor defined") from fast.py line 6611. I have no idea what happened.

System: (not provided)

Screenshots: image

andreped commented 1 year ago

It should be fast.PatchGenerator.create(...), with the arguments passed inside create() instead.

Here is a simple example of how to generate patches.
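For context, the FAST Python bindings disable the plain constructor in favour of a create() factory, which is why fast.PatchGenerator(...) raises AttributeError("No constructor defined"). A minimal sketch of that pattern in plain Python (the class here is an illustrative stand-in, not FAST's actual implementation):

```python
class PatchGenerator:
    """Illustrative stand-in for a FAST process object (not the real class)."""

    def __init__(self, *args, **kwargs):
        # Mirrors the behaviour of the generated bindings:
        # direct construction is disabled.
        raise AttributeError("No constructor defined")

    @classmethod
    def create(cls, width, height, **options):
        # The factory bypasses __init__ and sets up the object itself
        obj = object.__new__(cls)
        obj.width, obj.height, obj.options = width, height, options
        return obj


try:
    PatchGenerator(256, 256)          # fails, as in the bug report
except AttributeError as e:
    print(e)                          # -> No constructor defined

gen = PatchGenerator.create(256, 256, level=0, overlap=0.1)
print(gen.width, gen.options["level"])
```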

Talentsome-6 commented 1 year ago

and if I change the code like this image raise error like this image

smistad commented 1 year ago

You are trying to send a color patch (3 channels) into a network which expects a grayscale image (1 channel). "jugular_vein_segmentation.onnx" is a segmentation model for ultrasound images which are grayscale.
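To illustrate the mismatch: a WSI patch arrives as an H×W×3 array, while the ultrasound model expects H×W×1. A numpy sketch of the shape problem and a luminance-style conversion (a generic illustration, not FAST's API; in a real pipeline you would instead pick a model trained on RGB histology patches):

```python
import numpy as np

# A color WSI patch: height x width x 3 channels
patch = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
print(patch.shape)  # (256, 256, 3) -> 3 channels, rejected by a 1-channel model

# Standard ITU-R BT.601 luminance weights for RGB -> grayscale
weights = np.array([0.299, 0.587, 0.114])
gray = (patch.astype(np.float32) @ weights).astype(np.uint8)[..., np.newaxis]
print(gray.shape)  # (256, 256, 1) -> what a grayscale network expects
```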

Talentsome-6 commented 1 year ago

> You are trying to send a color patch (3 channels) into a network which expects a grayscale image (1 channel). "jugular_vein_segmentation.onnx" is a segmentation model for ultrasound images which are grayscale.

OK, thank you for your clear reply, I understand what happened now. Is there any way to show a progress bar when processing a WSI with ONNX? I tried a model I trained with YOLO before, but it took hours and I eventually gave up waiting. SimpleWindow2D shows the image, but I don't know if my model is working.

Talentsome-6 commented 1 year ago


I'm a coding rookie. The Python documentation doesn't give an example for the bounding-box network, so I tried to adapt the code from the other network examples. Would you please check my code to see if I wrote something wrong? image

smistad commented 1 year ago

Sorry to see there were some typos in this tutorial; I will fix that. In the meantime, here is a working example using a nuclei segmentation model by @andreped. The SimpleWindow should appear immediately and start to render the results.

import fast

# Import the whole-slide image (WSI) from the FAST test data
importer = fast.WholeSlideImageImporter\
    .create(fast.Config.getTestDataPath() + "/WSI/A05.svs")

# Segment the tissue so patches are only generated from tissue regions
tissueSegmentation = fast.TissueSegmentation.create()\
    .connect(importer)

# Generate 256x256 patches at 20x magnification with 10% overlap
generator = fast.PatchGenerator.create(256, 256, magnification=20, overlapPercent=0.1)\
    .connect(importer)\
    .connect(1, tissueSegmentation)

# Download a nuclei segmentation model and use it
p = fast.DataHub().download('nuclei-segmentation-model')
segmentation = fast.SegmentationNetwork.create(p.paths[0] + '/high_res_nuclei_unet.onnx', scaleFactor=1./255.)\
    .connect(generator)

# Stitch the patch-wise segmentations back together into a full image
stitcher = fast.PatchStitcher.create()\
    .connect(segmentation)

# Display the stitched segmentation results on top of the WSI
renderer = fast.ImagePyramidRenderer.create()\
    .connect(importer)

segmentationRenderer = fast.SegmentationRenderer.create()\
    .connect(stitcher)

fast.SimpleWindow2D.create()\
    .connect(renderer)\
    .connect(segmentationRenderer)\
    .run()

image

andreped commented 1 year ago

> I'm a coding rookie. The Python documentation didn't give an example for the bounding-box network, so I tried to adapt the code from the other network examples. Would you please check my code to see if I wrote something wrong?

I observed from your logs that you are using YOLOv5. Is that correct? AFAIK, FAST only supports YOLOv3 currently, see here. Is that correct, @smistad?

Here is an FPL demonstrating how to run a TinyYOLOv3 for nuclei detection. An equivalent FPL for the segmentation code example @smistad shared above can be seen here. An FPL is just an alternative way to run a pipeline, but most importantly, it should contain the necessary information you need to set up inference in pyFAST.

Note that in contrast to segmentation networks, instead of NeuralNetwork, PatchStitcher, and SegmentationRenderer POs, for object detection you will have to use BoundingBoxNetwork, NonMaximumSuppression, BoundingBoxSetAccumulator, and BoundingBoxRenderer. Also note that you will need to provide anchors, which I believe YOLOv5 uses?
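To give an idea of what the NonMaximumSuppression step does: it discards boxes that overlap a higher-scoring box beyond an IoU threshold. A minimal pure-Python sketch of the algorithm (illustrative only; in pyFAST you would use the NonMaximumSuppression PO rather than this):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, threshold=0.5):
    """Keep the highest-scoring boxes, dropping overlaps above threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]: the second box overlaps the first and is suppressed
```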

Talentsome-6 commented 1 year ago

Thanks for your reply. Yes, I'm using YOLOv5, which uses anchors. Sad to hear that FAST only supports YOLOv3, but I'm still amazed by the great work you guys have done. Maybe I should find another approach to run a YOLOv5 for object detection :)

andreped commented 1 year ago

We might add YOLOv5 in the future, if there is interest in it. Right now, neither @smistad nor I use object detectors in our work, hence we have not added support for new architectures.

But for now, if you wish to run your YOLOv5 model and take advantage of what else pyFAST offers, and you are familiar with how to run the model of interest in Python (in whichever framework you are using), you could make a custom Python Process Object.

Here is a simple example of how you could make a custom PO in Python and connect it with the rest of the FAST POs. The custom PO here simply inverts the image. Note that you will need to convert from FAST image to a suitable format (e.g., numpy or PyTorch Tensor), then after running model(x) or model.predict(x), you will have to convert the result back to a FAST tensor and return it. When it is finished, just replace it with the BoundingBoxNetwork PO.
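The numpy side of such an inverter is trivial; the real work is the conversion at the boundaries. A sketch of the core (the FAST-specific wrapping, i.e. subclassing a Python process object and converting with np.asarray and fast.Image.createFromArray, follows the linked example; treat those names as coming from that tutorial, not verified here):

```python
import numpy as np

def invert(image: np.ndarray) -> np.ndarray:
    """Invert an 8-bit image: this is the model(x) stand-in inside the custom PO."""
    return 255 - image

# Inside a custom process object you would roughly do:
#   data = self.getInputData()        # FAST image in
#   arr = np.asarray(data)            # FAST image -> numpy
#   out = invert(arr)                 # run your model / operation here
#   self.addOutputData(0, fast.Image.createFromArray(out))  # numpy -> FAST image

img = np.array([[0, 255], [128, 64]], dtype=np.uint8)
print(invert(img))  # -> [[255, 0], [127, 191]]
```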

It might be tricky to get the output in the appropriate format, but what if you give this a shot first, and then you can ask, if you have any further questions regarding the implementation? :)

Talentsome-6 commented 1 year ago


Okay, I will give it a shot. Thank you again for your patience. Best wishes for your work :)