saskra closed this issue 2 years ago
Ah I see, that's actually caused by the small size of your images in the z-direction. Could you try padding your image with zeros such that it has at least 32 slices? Then in addition you'd have to decrease the patch size in the "TorchModelFilter" to 32. If that doesn't work, try padding it to 64 slices and leave the patch size unchanged.
Hope that works! Otherwise, feel free to send me a demo image so I can try debugging it directly here.
I'll have a look at the issue with the Linux version and let you know once there's a new one uploaded.
Yes, the low z-resolution (one voxel is 2x2x150 nm) has already caused problems with the "old" Cellpose. There, however, one could at least theoretically specify a conversion factor if the resolution is not identical in all three directions. So here I would have to add artificial z-planes?
It doesn't necessarily have to be isotropic; rather, the CNN-based processing requires a minimum number of slices in z. In the version you tried so far, it attempted to process image patches of size 256x256x64, which isn't possible given that there are only 21 slices in your image. Important thing here: your image has to be larger than or equal to the selected patch size (found in the "TorchModelFilter"), and in addition the individual dimensions of the patch size should be evenly divisible by 2 multiple times (due to the downsampling performed by the CNN). Powers of 2 are thus usually a good choice. You can simply pad the image with zeros (e.g., using the "Image -> Stacks -> Add Slice" function of Fiji).
For now, it's unfortunately not possible otherwise, but potentially something that we could do automatically in the long term, i.e., automatically padding images in z if they are too small.
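The zero-padding suggested above can also be scripted instead of adding slices manually in Fiji. Here is a minimal sketch using NumPy, assuming the volume is stored as a (z, y, x) array (the dummy 21-slice volume below is just an illustration):

```python
import numpy as np

# Hypothetical example volume: 21 z-slices of 256x256 pixels, axis order (z, y, x).
volume = np.zeros((21, 256, 256), dtype=np.uint16)

target_slices = 32  # should match the z patch size set in the "TorchModelFilter"
pad_z = max(0, target_slices - volume.shape[0])

# Pad symmetrically with zeros along z (the extra slice goes at the end if odd).
padded = np.pad(
    volume,
    ((pad_z // 2, pad_z - pad_z // 2), (0, 0), (0, 0)),
    mode="constant",
    constant_values=0,
)

print(padded.shape)  # (32, 256, 256)
```

Setting `target_slices = 64` instead would cover the fallback suggested earlier, where the patch size is left unchanged.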
Thank you very much, it worked with that!
Unfortunately, the result does not look convincing yet, but maybe I need to change something in the settings, or probably even train it myself. Does the latter also work via Xpiwit, and are there examples or a tutorial for this?
Okay, good to hear that it works now. Did you use 32 slices/patches now, or even 64?
It can very well be that it doesn't perform well on unseen data or other model organisms as it was only trained for Arabidopsis so far. You can retrain the network with the Python code in the repository and the relevant script for this would be this one: "https://github.com/stegmaierj/Cellpose3D/blob/main/train_network.py". Also make sure to convert your data in that case appropriately as mentioned in the README.md.
I used 32.
Unfortunately, the training will only work on my server with graphics cards, which runs Ubuntu 18.04, on which Xpiwit doesn't seem to work.
There is no parameter in this script to pass the path to my images, is there?
Hi saskra. Parameters regarding image data can be adjusted in the model file located at "models/UNet3D_cellpose.py". You would need to set up training, validation, and testing CSV files that list the corresponding image/mask pairs, such that data_root joined with the paths given in the CSV files points to each file. There is a helper function in "utils/csv_generator" that can be used to create those files. As already mentioned, please make sure to convert your data as described in the README. I hope this clarifies the problem. Please let us know if the problem persists or if there are any other issues when training on your own data.
Thank you! But there seem to be more settings necessary in this script, right? I think that's another topic, though, and one that actually belongs in the other repository, so I'll open a new issue there and close this one.
It works with the sample files provided, but not with my own. Is there perhaps a log file somewhere with a more detailed error message?