We run into a problem when the shape of the original, high-res data is not divisible by 2 and a workflow segments the initial objects (organoids) at level 2. If the user then tries to segment single cells within the organoids at level 0 (which requires loading the level 2 organoid segmentation maps and upsampling them to level 0 for masking), it fails here:

https://github.com/fractal-analytics-platform/fractal-tasks-core/blob/9ffc3c68152c7ceeb87f3b2afbc5d5aa0f6a0ddb/fractal_tasks_core/upscale_array.py#L85

And we get an error like:

```
ValueError: Cannot convert highres_region=(slice(0, 227, None), slice(0, 580, None), slice(5540, 6524, None)), given lowres_shape=(227, 3885, 3585) and highres_shape=(227, 15543, 14342). Incommensurable sizes highres_size=15543 and lowres_size=3885.
```

The high-res image has a shape of 15543x14342, which is not an exact multiple of the level 2 shape (3885 * 4 = 15540, not 15543). This happened because the original image was a search-first dataset: images were placed into the OME-Zarr array based on their microscope stage coordinates, not snapped to a regular grid.
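The incommensurability check can be reproduced with a minimal sketch. Note that `upscale_2d` below is a hypothetical helper written for illustration, not the actual fractal-tasks-core implementation; it only mirrors the "each high-res size must be an exact integer multiple of the low-res size" requirement that the error message reports:

```python
import numpy as np

def upscale_2d(lowres: np.ndarray, highres_shape: tuple) -> np.ndarray:
    """Nearest-neighbor upscaling that requires every high-res size to be
    an exact integer multiple of the matching low-res size."""
    out = lowres
    for axis, (hi, lo) in enumerate(zip(highres_shape, lowres.shape)):
        if hi % lo != 0:
            raise ValueError(
                f"Incommensurable sizes highres_size={hi} and lowres_size={lo}."
            )
        # Repeat each pixel hi // lo times along this axis
        out = np.repeat(out, hi // lo, axis=axis)
    return out

# Exact multiple: upscaling by 4 in each dimension works
print(upscale_2d(np.zeros((100, 100)), (400, 400)).shape)

# The shapes from the report: 15543 = 3885 * 4 + 3, so the check fails
try:
    upscale_2d(np.zeros((3885, 3585)), (15543, 14342))
except ValueError as err:
    print(err)
```

With stage-coordinate placement, the level 0 shape picks up a remainder (here 3 pixels in y, 2 in x) that no integer upscaling factor can bridge.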