Closed 464hee closed 3 months ago
We will investigate this matter further. While we acknowledge the upfront costs, it's worth noting that postprocessing time becomes negligible when conducting inference on large WSIs with numerous 1024px tiles.
The time-consuming operation is in the `get_cell_predictions_withtokens` function: `(_, instance_types,) = self.model.calculate_instance_map(predictions, magnification=magnification)`, which every 1024×1024 prediction must go through.
So I think the postprocessing time is still significant for large WSIs composed of many 1024×1024 tiles.
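For reference, this is roughly how one could measure inference time versus postprocessing time per tile. The `predict` and `postprocess` callables here are stand-ins for illustration, not the actual CellViT API; in practice `postprocess` would wrap the `calculate_instance_map` call:

```python
import time

def profile_tile(predict, postprocess, tile):
    """Time the inference and postprocessing stages for a single tile.

    Returns (inference_seconds, postprocess_seconds).
    """
    t0 = time.perf_counter()
    predictions = predict(tile)       # e.g. the model forward pass
    t1 = time.perf_counter()
    postprocess(predictions)          # e.g. calculate_instance_map(...)
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1

if __name__ == "__main__":
    # Stand-in callables simulating a 1024x1024 tile pipeline.
    tile = [[0] * 8 for _ in range(8)]
    infer_s, post_s = profile_tile(lambda t: t, lambda p: p, tile)
    print(f"inference: {infer_s:.6f}s, postprocess: {post_s:.6f}s")
```

Summing these two numbers over all tiles of a slide would show how much of the total runtime the HoVerNet postprocessing accounts for.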
Ok, I thought you meant the cell merging. We are going to investigate this, but it requires a major rewrite of the HoVerNet postprocessing.
OK. If this function is accelerated, I believe whole-slide prediction will reach even better speeds.
**Describe the bug**
Despite the speedup, I tested and found that the post-processing time was three times the prediction time.

**To Reproduce**
Steps to reproduce the behavior:

**Expected behavior**
I want the post-processing time to be less than the prediction time.

**Additional context**
@FabianHoerst