stegmaierj / XPIWITPipelines

Collection of XPIWIT Pipelines

Error while executing pipeline for input set .0 #1

Closed saskra closed 2 years ago

saskra commented 2 years ago

It works with the sample files provided, but not with my own. Is there perhaps a log file somewhere with a more detailed error message?

C:/PycharmProjects/XPIWITPipelines-main/output/item_0007_GradientVectorFlowTrackingImageFilter/WT_M1_1_21_img_GradientVectorFlowTrackingImageFilter_Out1.tif does not exist yet.
Executing Process Object: IMAGEREADER...
Updating reader took: 0.014
Updating intensity window filter took: 0.005
Updating image wrapper took: 0
+ Execution of IMAGEREADER finished.
    - Duration: 0.02 seconds.
    + Settings:
        - WriteResult: 0
        - WriteMetaData: 1
        - MaxThreads: -1
        - Compression: 1
        - Precision: 16
        - UseSeriesReader: 0
        - SeriesMinIndex: 0
        - SeriesMaxIndex: 499
        - SeriesIncrement: 1
        - SpacingX: 1
        - SpacingY: 1
        - SpacingZ: 1
        - InputMinimumValue: 0
        - InputMaximumValue: 255
-----------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
Initialize Process Object: RESCALEINTENSITYIMAGEFILTER...
-----------------------------------------------------------------------------------------------
Executing Process Object: RESCALEINTENSITYIMAGEFILTER...
+ Execution of RESCALEINTENSITYIMAGEFILTER finished.
    - Duration: 0.025 seconds.
    + Settings:
        - WriteResult: 0
        - WriteMetaData: 1
        - MaxThreads: 6
        - Compression: 1
        - Precision: 16
-----------------------------------------------------------------------------------------------
Executing Process Object: TORCHMODEL...
-----------------------------------------------------------------------------------------------
Initialize Process Object: TORCHMODEL...
-----------------------------------------------------------------------------------------------
- Processing image with 6 threads.
- Image Dimension: 3
- Image Size: [480, 480, 21]
- Patch Size: [256, 256, 64]
- Patch Stride: [128, 128, 32]
- Num Steps: [3, 3, 0]
- Processing region with index [0, 0, -43] and size [256, 256, 64]
Error while executing pipeline for input set .0
Error for image: WT_M1_1_21_img.tif
In path: C:/PycharmProjects/XPIWITPipelines-main/Data/WT_M1_1_21_img.tif
-----------------------------------------------------------------------------------------------
--> Pipeline successfully executed in 0.851 seconds.
stegmaierj commented 2 years ago

Ah I see, that's actually caused by the small size of your images in the z-direction. Could you try padding your image with zeros so that it has at least 32 slices? In addition, you'd have to decrease the patch size in the "TorchModelFilter" to 32. If that doesn't work, try padding to 64 slices and leave the patch size unchanged.
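
Something along these lines should do the padding (just a minimal sketch using numpy/tifffile, not part of XPIWIT; it assumes the stack is stored as (z, y, x), as the log above suggests, and reuses the file name from the log):

```python
# Minimal sketch (not XPIWIT code): zero-pad a 3D TIFF along z to a target slice count.
# Assumes a (z, y, x) axis order, i.e. 21 slices of 480x480 for the image from the log.
import numpy as np
import tifffile

def pad_z(in_path, out_path, target_slices=32):
    img = tifffile.imread(in_path)                      # shape (z, y, x)
    missing = max(0, target_slices - img.shape[0])
    padded = np.pad(img, ((0, missing), (0, 0), (0, 0)), mode="constant")
    tifffile.imwrite(out_path, padded)

pad_z("WT_M1_1_21_img.tif", "WT_M1_1_21_img_padded32.tif", target_slices=32)
```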

Hope that works! Otherwise, feel free to send me a demo image so I can also try debugging it directly here.

I'll have a look at the issue with the Linux version and let you know once there's a new one uploaded.

saskra commented 2 years ago

Yes, the low z-resolution (one voxel is 2x2x150 nm) has already caused problems with the "old" Cellpose. There, however, one could at least theoretically specify a conversion factor if the resolution is not identical in all three directions. So here I would have to add artificial z-planes?

stegmaierj commented 2 years ago

It doesn't necessarily have to be isotropic; it's rather due to the CNN-based processing that a minimum number of slices is required in z. In the version you tried so far, it attempted to process image patches of size 256x256x64, which isn't possible given that there are only 21 slices in your image. Important thing here: your image has to be larger than or equal to the selected patch size (found in the "TorchModelFilter"), and in addition the individual dimensions of the patch size should be evenly divisible by 2 multiple times (due to the downsampling performed by the CNN). Powers of 2 are thus usually a good choice. You can simply pad the image with zeros (e.g., using the "Image -> Stacks -> Add Slice" function of Fiji).
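
As a quick sanity check, something like the following captures those two constraints (an illustrative sketch only; the number of downsampling steps used below is a placeholder assumption, not a value taken from the actual network):

```python
# Illustrative sketch: check image size vs. patch size and patch-size divisibility.
# min_downsamplings is a placeholder assumption, not read from the real CNN.
def check_patch_config(image_size, patch_size, min_downsamplings=3):
    for axis, (img_dim, patch_dim) in enumerate(zip(image_size, patch_size)):
        if img_dim < patch_dim:
            print(f"Axis {axis}: pad the image ({img_dim} < patch size {patch_dim})")
        if patch_dim % (2 ** min_downsamplings) != 0:
            print(f"Axis {axis}: patch size {patch_dim} is not divisible by 2^{min_downsamplings}")

# Values from the log above: 480x480x21 image, 256x256x64 patches.
check_patch_config(image_size=(480, 480, 21), patch_size=(256, 256, 64))
```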

For now, it's unfortunately not possible otherwise, but potentially something that we could do automatically in the long term, i.e., automatically padding images in z if they are too small.

saskra commented 2 years ago

Thank you very much, it worked with that!

Unfortunately, the result does not look convincing yet, but maybe I need to change something in the settings or probably even train the model myself. Can the latter also be done via XPIWIT, and are there examples or a tutorial for this?

stegmaierj commented 2 years ago

Okay, good to hear that it works now. Did you use 32 slices/patches now, or even 64?

It can very well be that it doesn't perform well on unseen data or other model organisms, as it was only trained on Arabidopsis so far. You can retrain the network with the Python code in the repository; the relevant script for this is "https://github.com/stegmaierj/Cellpose3D/blob/main/train_network.py". Also make sure to convert your data appropriately in that case, as mentioned in the README.md.

saskra commented 2 years ago

I used 32.

Unfortunately, the training will only work on my server with graphics cards, but that runs Ubuntu 18.04, on which XPIWIT doesn't seem to work.

saskra commented 2 years ago

> You can retrain the network with the Python code in the repository; the relevant script for this is "https://github.com/stegmaierj/Cellpose3D/blob/main/train_network.py". Also make sure to convert your data appropriately in that case, as mentioned in the README.md.

There is no parameter in this script to pass the path to my images, is there?

DEschweiler commented 2 years ago

Hi saskra. Parameters regarding the image data can be adjusted in the model file located at "models/UNet3D_cellpose.py". You would need to set up training, validation and testing CSV files that list the corresponding image/mask pairs; the joined path of data_root and each path given in the CSV files should point to the respective file. There is a helper function in "utils/csv_generator" that can be used to create those files. As already mentioned, please make sure to convert your data as described in the readme. I hope this clarifies the problem. Please let us know if the problem persists or if there are any other issues when training on your own data.
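
For illustration, such a CSV could also be written by hand along these lines (a hypothetical sketch only; the exact column layout and delimiter expected by the training code should be taken from utils/csv_generator, and data_root as well as the file names below are made up):

```python
# Hypothetical sketch: write a CSV listing image/mask pairs relative to data_root.
# The real format should be checked against utils/csv_generator in the Cellpose3D repo;
# the semicolon-delimited two-column layout and all paths below are assumptions.
import csv
from pathlib import Path

data_root = Path("/data/my_dataset")   # joined with the relative paths listed below
pairs = [
    ("images/stack_001.tif", "masks/stack_001.tif"),
    ("images/stack_002.tif", "masks/stack_002.tif"),
]

with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter=";")
    for image_rel, mask_rel in pairs:
        # data_root / image_rel and data_root / mask_rel must point to existing files.
        writer.writerow([image_rel, mask_rel])
```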

saskra commented 2 years ago

Thank you! There seem to be more settings necessary in this script, though, right? But I think that's another topic and actually belongs in the other repository, so I'll open a new issue there and close this one.