https://github.com/qAp/sartorius_cell_instance_segmentation_kaggle/issues/2#issuecomment-992278340
It seems that the only thing worth pre-generating is the semantic segmentation; everything else needs to be computed after the semantic segmentation has gone through whatever transforms are applied during data augmentation.
For example, the distance transform and the discrete watershed energy at a point in the image can change after a perspective transform, and the uvec can change after a rotation.
The only quantity that is preserved at a point in the image is its semantic label: a point belonging to a cell nucleus stays a nucleus point regardless of the transforms applied, and similarly for a point belonging to a cell wall.
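A minimal sketch of this ordering, with illustrative names (`augment_then_derive` is not from the repo): `np.rot90` stands in for an arbitrary spatial augmentation, and scipy's Euclidean distance transform stands in for the derived targets. The class label of a point survives the transform, so the semantic segmentation can be pre-generated, while the distance transform is derived afterwards.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def augment_then_derive(semseg, spatial_transform):
    """Apply the spatial transform to the pre-generated semantic
    segmentation first, then derive the geometric targets from the
    augmented mask (rather than augmenting pre-computed targets)."""
    aug = spatial_transform(semseg)
    # The distance transform is only valid if computed *after* the
    # spatial transform has been applied to the segmentation.
    dtfm = distance_transform_edt(aug).astype(np.float32)
    return aug, dtfm

# np.rot90 stands in here for a spatial augmentation such as a rotation.
semseg = np.zeros((8, 8), dtype=np.uint8)
semseg[2:6, 3:7] = 1
aug, dtfm = augment_then_derive(semseg, np.rot90)
```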
Whole-image maps of the distance transform, the watershed energy, and the uvec are built up by computing them for individual cells and pasting them into the image frame, in descending order of cell area (so that smaller cells overwrite larger ones where they overlap).
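As a sketch of the pasting scheme, using the distance transform as the example map (`whole_image_dtfm` is an illustrative name, not from the repo):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def whole_image_dtfm(instance_masks):
    """Build a whole-image distance-transform map from per-cell masks.

    instance_masks: list of boolean (H, W) arrays, one per cell.
    Cells are pasted in descending order of area, so smaller cells
    overwrite larger ones wherever they overlap.
    """
    h, w = instance_masks[0].shape
    dtfm = np.zeros((h, w), dtype=np.float32)
    for mask in sorted(instance_masks, key=lambda m: m.sum(), reverse=True):
        # Distance transform computed per cell, then pasted into the frame.
        cell_dt = distance_transform_edt(mask)
        dtfm[mask] = cell_dt[mask]
    return dtfm
```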
There would be fewer computations if the whole-image uvec map were computed directly from the whole-image dtfm, but the resulting uvec map doesn't appear to mark out the cell boundaries well.
In addition, for rotational transforms during data augmentation, the uvec needs to be recomputed from the distance transform after the distance transform itself has been rotated. Its components are vector components: they cannot simply be treated as scalar maps and rotated the usual way with albumentations, because a rotation mixes the two components into each other.
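A small numpy sketch of the difference, with illustrative names (`uvec` here is just the normalised gradient of the distance transform; `np.rot90` stands in for a rotation augmentation):

```python
import numpy as np

def uvec(dtfm, eps=1e-8):
    """Unit gradient field of the distance transform, shape (2, H, W)."""
    gy, gx = np.gradient(dtfm.astype(np.float32))
    n = np.sqrt(gx * gx + gy * gy) + eps
    return np.stack([gy / n, gx / n])

np.random.seed(0)
dtfm = np.random.rand(8, 8).astype(np.float32)

# Correct: rotate the distance transform first, then recompute the uvec.
good = uvec(np.rot90(dtfm))

# Wrong: rotating each uvec channel as if it were a scalar map leaves the
# vector components un-mixed, so the vectors no longer point the right way.
bad = np.stack([np.rot90(c) for c in uvec(dtfm)])
```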
Because of this, it seems there are two options: