Luchixiang opened this issue 1 year ago
Hi! Thanks for your interest and your question!
You can find model checkpoints on s3://janelia-cosem-networks. They are just tensorflow models (unfortunately an old version, though), and there are also some scripts. If you just wanna predict individual blocks, you can look at the inference mode in the unet_template.py files.
Personally, I use this code for efficient inference on volumes: https://github.com/saalfeldlab/simpleference. You can see how I use that in the inference_config.py files on s3.
For ER and mito you'd probably wanna start with setup03 if your data is close to 4nm isotropic, or setup04 if your data is close to 8nm isotropic. Those are what we call the "many" networks in the manuscript.
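The choice between the two setups comes down to which target resolution is closer to your data's voxel size. A minimal sketch of that decision (the helper function name and the tie-breaking rule are my own; only the setup names and their 4nm/8nm targets come from the comment above):

```python
def pick_setup(voxel_size_nm):
    """Pick one of the pretrained "many" setups by isotropic voxel size.

    Illustrative helper only: setup03 targets ~4 nm isotropic data,
    setup04 targets ~8 nm, per the comment above.
    """
    # choose whichever target resolution is closer to the data
    if abs(voxel_size_nm - 4) <= abs(voxel_size_nm - 8):
        return "setup03"
    return "setup04"
```

For example, `pick_setup(4.3)` would suggest setup03, while `pick_setup(7.5)` would suggest setup04.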
When I talk about evaluation in the instructions I don't mean inference/prediction, but the metrics we computed for comparing to manual annotations on small blocks.
Hope this gets you started!
Hi! Thank you for your answer. I followed your instructions and used https://github.com/saalfeldlab/simpleference to test on part of the mouse liver dataset on openorganelle, but got unsatisfactory results.
I first converted the image stacks to n5 using z5py and used the command line in simpleference/example to run the inference (modifying the path and shape).
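The core of that conversion is stacking the 2D image slices into a single 3D array before writing it out. A minimal numpy sketch of that step (the function name and chunk shape are my own choices, not from the thread; the commented-out write step shows where z5py would come in):

```python
import numpy as np

def stack_slices(slices):
    """Stack a list of 2D image arrays (z-slices) into one 3D volume.

    This is the numpy half of the image-stack-to-n5 conversion; the
    result would then be written out with z5py's n5 backend.
    """
    volume = np.stack(slices, axis=0)  # shape: (z, y, x)
    return np.ascontiguousarray(volume)

# Hypothetical write step, shown for context only (requires z5py;
# exact keyword arguments may differ between z5py versions):
# import z5py
# with z5py.File("volume.n5", use_zarr_format=False) as f:
#     f.create_dataset("raw", data=volume,
#                      chunks=(64, 64, 64), compression="gzip")
```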
What's more, during testing I got an error from line 66 of simpleference/gunpowder/tensorflowbackend (`assert output.ndim == 4`). My output ndim is 3 (the shape is 68 x 68 x 68, same as the output shape). I simply used expand_dims on the first dimension to get around this; I don't know whether this affects the final results, or why the problem occurs in the first place.
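The workaround described above amounts to adding a leading channel axis so the backend's 4D assertion passes. A sketch of that fix (the placeholder array stands in for the model prediction; note the reply below points out the model should really emit 14 channels, so a 3D output may itself be a symptom of the wrong script):

```python
import numpy as np

# placeholder for a single-channel prediction of shape (z, y, x)
output = np.zeros((68, 68, 68), dtype=np.float32)

# the tensorflow backend expects (channel, z, y, x), i.e. ndim == 4
if output.ndim == 3:
    output = np.expand_dims(output, axis=0)  # add a leading channel axis

assert output.ndim == 4
assert output.shape == (1, 68, 68, 68)
```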
Unfortunately I get an error when trying to open your image, but what you might be seeing is that the mouse liver dataset is a hard lift for the pretrained models, because it's not well represented by the training data that was used. For the segmentations that are up on openorganelle, we started collecting more training data.
Regarding the channel issue: I realize now that I gave you the link to the main repo, but I use my own fork, https://github.com/neptunes5thmoon/simpleference, where I made some edits to use the n5 backend from zarr instead of z5py and to be more flexible with output channels (the model should be outputting one channel for each of the 14 classes it predicts). It's not well documented, though. Sorry about that! Maybe try working with this script from the repo: https://github.com/saalfeldlab/CNNectome/blob/master/CNNectome/inference/unet_inference.py
If you wanna work with pytorch instead, check out this repo to transfer the tensorflow weights to pytorch: https://github.com/pattonw/cnnectome.conversion
Thank you! I'll follow the instructions and try it again.
Hi! Thank you for your awesome code. I have a question: suppose I have a set of volume data, how can I use the pretrained models to segment the ER and mito? I'm a little bit confused about the evaluation instructions.