Buglakova opened 5 months ago
According to Lorenzo, this has in fact been a feature since the beginning of PlantSeg. It has been improved in #249 in response to this issue, and I think it will be merged into master tomorrow.
This recent improvement to the documentation is a step in the right direction, though it does not completely resolve the issue.
To get correct segmentation, it's important that the objects in the new image are approximately the same size as those in the training data. This doesn't necessarily mean the voxel size should be the same. For example, if I apply the same `generic_plant_nuclei_3D` model to an image scaled differently, I can get results as different as the following:

nuclei channel of prediction
nuclei channel of prediction for a rescaled image
![Screenshot from 2024-01-29 12-24-23](https://github.com/hci-unihd/plant-seg/assets/89460016/2d9490e0-4fe0-460c-8ad8-584cba05b285)
raw

Suggestion: add a function similar to Cellpose's average diameter. Build a look-up table with the average size of segmented objects for each available model, then ask the user to estimate the expected object size in their data and derive the rescale factors from the ratio. For example, suppose a model was trained on ground truth where the average nucleus size was (50, 50, 50) pixels, and in the user's data it is (10, 100, 100) (estimated by simply counting pixels in the image). Then suggest rescaling with factors (5, 0.5, 0.5).
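The look-up-table idea above could be sketched roughly like this. The model name, table values, and function name are all hypothetical, chosen only to illustrate the ratio computation:

```python
# Hypothetical sketch of the suggested look-up table approach.
# The table values and model name are illustrative, not real PlantSeg metadata.
MODEL_AVG_OBJECT_SIZE = {
    # model name -> average object size (z, y, x) in pixels in its training data
    "generic_plant_nuclei_3D": (50, 50, 50),
}

def suggest_rescale_factors(model_name, observed_size):
    """Per-axis factors that bring objects in the user's image to the
    average size the model saw during training."""
    trained = MODEL_AVG_OBJECT_SIZE[model_name]
    return tuple(t / o for t, o in zip(trained, observed_size))

# Example from the issue: observed nuclei size (10, 100, 100)
print(suggest_rescale_factors("generic_plant_nuclei_3D", (10, 100, 100)))
# -> (5.0, 0.5, 0.5)
```

The user would then rescale their image by these factors (e.g. with `scipy.ndimage.zoom`) before running the prediction.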
As it stands, it's not obvious that correct rescaling is important, so a non-expert user may simply conclude that none of the networks work for their data.