TIO-IKIM / CellViT

CellViT: Vision Transformers for Precise Cell Segmentation and Classification
https://doi.org/10.1016/j.media.2024.103143

Inference speed improved by 100x for postprocessing, but it is still seriously time-consuming #20

Closed 464hee closed 3 months ago

464hee commented 10 months ago

Describe the bug: Despite the speedup, I tested and found that the post-processing time was three times the prediction time.

To Reproduce Steps to reproduce the behavior:

  1. Command
  2. File
  3. Error Traceback: reproduce.zip (this is my test code)

Testing with all other conditions identical and changing only whether post-processing is applied: without post-processing, inference takes 6 s; with post-processing, 21.9 s. The conclusion from this test is that if the post-processing can be sped up further, or moved onto the GPU, the whole prediction process could be accelerated considerably.
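A with/without-postprocessing comparison like the one above can be sketched with a small timing harness. This is a minimal, self-contained sketch: `fake_forward` and `fake_postprocess` are hypothetical stand-ins for the CellViT forward pass and the postprocessing step, not the project's actual functions.

```python
import time
import numpy as np

def time_stage(fn, *args, repeats=3):
    """Time a callable over several repeats and return the best wall-clock time."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# Hypothetical stand-ins: in the real benchmark these would be the model
# forward pass and the per-tile postprocessing respectively.
def fake_forward(batch):
    return batch * 2.0

def fake_postprocess(pred):
    return (pred > 1.0).astype(np.uint8)

batch = np.random.rand(8, 1024, 1024).astype(np.float32)
pred = fake_forward(batch)

t_fwd = time_stage(fake_forward, batch)
t_post = time_stage(fake_postprocess, pred)
print(f"forward: {t_fwd:.4f}s  postprocess: {t_post:.4f}s  ratio: {t_post / t_fwd:.2f}")
```

Taking the best of several repeats reduces noise from caches and scheduling, which matters when the two stages are only a few seconds each.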

Expected behavior: I want the post-processing time to be less than the prediction time.

Additional context: @FabianHoerst

FabianHoerst commented 10 months ago

We will investigate this matter further. While we acknowledge the upfront costs, it's worth noting that postprocessing time becomes negligible when conducting inference on large WSIs with numerous 1024px tiles.
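The amortization argument can be made concrete with back-of-the-envelope arithmetic. All numbers below are hypothetical, chosen only to show how a fixed upfront cost shrinks as a share of total runtime when the tile count grows:

```python
# Hypothetical numbers, for illustration only (not measured values):
fixed_s = 15.0     # assumed one-off upfront cost (model load, setup, merging)
per_tile_s = 0.05  # assumed per-tile inference cost

for tiles in (10, 1_000, 10_000):
    total = fixed_s + tiles * per_tile_s
    share = fixed_s / total
    print(f"{tiles:6d} tiles: fixed overhead = {share:.1%} of total")
```

With only a handful of tiles the fixed cost dominates; on a large WSI with thousands of 1024px tiles its share drops to a few percent. Note this only applies to upfront costs, not to costs paid per tile.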

464hee commented 10 months ago

(screenshot) The time-consuming operation comes from this code in the `get_cell_predictions_withtokens` function, which every 1024x1024 prediction must go through: `(_, instance_types) = self.model.calculate_instance_map(predictions, magnification=magnification)`
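Since that call is made independently per tile, one mitigation (separate from rewriting the postprocessing itself) is to run the per-tile step concurrently. This is a hedged sketch, not the project's code: `postprocess_tile` is a hypothetical stand-in for `calculate_instance_map`, and the threading only pays off because NumPy releases the GIL in most array operations.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def postprocess_tile(pred):
    """Hypothetical stand-in for calculate_instance_map on one tile.
    The real HoVerNet-style step separates touching instances; here we
    only threshold, to show the shape of the pipeline."""
    return (pred > 0.5).astype(np.int32)

def postprocess_batch(preds, workers=4):
    """Run per-tile postprocessing concurrently across tiles. Threads
    overlap here because NumPy releases the GIL inside array ops; for
    pure-Python postprocessing a process pool would be needed instead."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(postprocess_tile, preds))

tiles = [np.random.rand(1024, 1024).astype(np.float32) for _ in range(8)]
maps = postprocess_batch(tiles)
print(len(maps), maps[0].shape)
```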

464hee commented 10 months ago

So I think the time consumption is still noteworthy when processing large WSIs made up of many 1024x1024 tiles.

FabianHoerst commented 10 months ago

Ok, I thought you meant the cell merging. We are going to investigate this, but it requires a major rewrite of the HoVerNet postprocessing.
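For context on what such a rewrite would touch: HoVerNet-style postprocessing builds a watershed energy surface from the nuclear-probability map and the gradients of the horizontal/vertical distance maps. The sketch below is a rough, simplified illustration of that marker/energy step only; the real implementation adds filtering and marker-based watershed, and the function name here is my own.

```python
import numpy as np

def hovernet_energy(np_map, hv_map):
    """Simplified sketch of the HoVerNet energy step: take gradients of
    the horizontal/vertical distance maps, combine them into an edge
    response, and subtract it from the nuclear probability to get an
    energy surface whose ridges separate touching nuclei."""
    gx = np.abs(np.gradient(hv_map[..., 0], axis=1))  # horizontal-map gradient
    gy = np.abs(np.gradient(hv_map[..., 1], axis=0))  # vertical-map gradient
    edge = np.maximum(gx, gy)
    return np.clip(np_map - edge, 0.0, 1.0)

np_map = np.random.rand(64, 64).astype(np.float32)            # nuclear probability
hv_map = np.random.rand(64, 64, 2).astype(np.float32) * 2 - 1  # HV maps in [-1, 1]
e = hovernet_energy(np_map, hv_map)
print(e.shape)
```

Because every step here is an elementwise array op or a gradient, this is the kind of code that could plausibly be batched and moved to the GPU; the hard part of the rewrite is the per-instance watershed/labeling that follows.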

464hee commented 10 months ago

OK, if this function is accelerated, I believe whole-slide prediction will reach even better speeds.