Closed by CheLamVien 1 day ago
You can normalize first and then loop:

```python
from cellpose import transforms, utils

# normalize in 3D (all slices together), then turn off
# per-slice normalization inside eval
imgs_norm = transforms.normalize99(imgs, downsample=True)

masks = []
for i in range(len(imgs_norm)):
    # eval returns (masks, flows, styles); keep the masks only
    masks.append(model.eval(imgs_norm[i], diameter=diameter[i],
                            normalize=False, ...)[0])
masks_stitched = utils.stitch3D(masks, stitch_threshold=0.5)
```
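For intuition, `stitch3D` links a mask in one slice to a mask in the next slice when their intersection-over-union exceeds `stitch_threshold`. Below is a simplified NumPy sketch of that idea for a single pair of slices — an illustration of IoU-based stitching, not Cellpose's actual implementation:

```python
import numpy as np

def stitch_pair(m0, m1, threshold=0.5):
    """Relabel masks in slice m1 to match overlapping masks in slice m0.
    Simplified IoU-based stitching (illustrative, not cellpose.utils.stitch3D)."""
    out = np.zeros_like(m1)
    next_label = m0.max() + 1
    for lab in np.unique(m1)[1:]:           # skip background (label 0)
        region = m1 == lab
        overlap = np.bincount(m0[region])   # pixel overlap with each m0 label
        overlap[0] = 0                      # ignore background overlap
        best = overlap.argmax()
        if best > 0:
            inter = overlap[best]
            union = region.sum() + (m0 == best).sum() - inter
            if inter / union >= threshold:
                out[region] = best          # same cell: inherit m0's label
                continue
        out[region] = next_label            # no match: start a new cell
        next_label += 1
    return out

m0 = np.zeros((6, 6), int); m0[1:4, 1:4] = 1
m1 = np.zeros((6, 6), int); m1[2:5, 1:4] = 7   # shifted copy, IoU = 6/12 = 0.5
print(stitch_pair(m0, m1))                     # overlapping mask relabeled to 1
```

With `threshold=0.5` the shifted mask is recognized as the same cell; raising the threshold above the 0.5 IoU would instead assign it a fresh label.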
Training in the GUI, as you've inferred, always normalizes each slice. Alternatively, you can train via the API: normalize all the slices together yourself, and then turn off normalization in train.train_seg.
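To see why per-slice normalization hides the dimming at the top of the stack: percentile normalization rescales each input between its 1st and 99th percentile values, so applying it slice by slice maps a dim slice and a bright slice onto the same range. A minimal NumPy sketch of the difference (illustrative only, not Cellpose's exact normalize99 implementation):

```python
import numpy as np

def norm99(x):
    # rescale to roughly [0, 1] between the 1st and 99th percentiles
    lo, hi = np.percentile(x, 1), np.percentile(x, 99)
    return (x - lo) / (hi - lo)

# toy stack: the top slice is 10x dimmer than the bottom slice
rng = np.random.default_rng(0)
base = rng.random((64, 64))
stack = np.stack([0.1 * base, base])

# per-slice normalization erases the brightness difference
per_slice = np.stack([norm99(s) for s in stack])
# whole-stack normalization preserves it
whole = norm99(stack)

print(per_slice[0].mean() / per_slice[1].mean())  # ≈ 1.0: difference erased
print(whole[0].mean() / whole[1].mean())          # ≈ 0.1: difference preserved
```

This is why normalizing the whole stack once, and passing normalize=False at evaluation, lets the model see the true intensity falloff along z.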
Dear all,
I wanted to make a prediction on a 3D stack, and I found that the stitching method works better than `do_3D`. However, I would like to use a different `diameter` for each 2D slice. Is there an option for that? A workaround could be to predict the 2D images one by one with a specified `diameter` and then stitch them. How can I do that using Cellpose?

At the top of the 3D stack, where the fluorescence intensity is dimmer, fewer pixels are expected to be detected as a mask. I re-trained the "cyto3" model using the GUI, but it seems to me that the model doesn't take the intensity values into consideration; I guess that is because of normalization. For prediction I tried `normalize=False`, but it gave poor results. I found that reducing the `diameter` could solve my problem.

Thank you so much for your time. I would greatly appreciate any feedback on improving the model.

Best regards,
Vien Che