Eddymorphling opened this issue 7 months ago
Ok, made some progress: I had to manually install `nnunetv2==2.2` to fix the above error. I believe the `requirements.txt` file installs nnunetv2 v2.3.1 by default. I also edited the `requirements.txt` file to add some missing packages. Here is what worked for me:
```
nnunetv2>=2.2
torch
opencv-python
loguru
types-python-dateutil
webencodings
ipython
matplotlib-inline
nbconvert
ipywidgets
entrypoints
prompt-toolkit
pygments
anyio
nbformat
websocket-client
ipython-genutils
tomli
wcwidth
```
@Eddymorphling thank you for trying this.
I don't understand why you had to install nnunet manually, because it is specified in the `requirements.txt` file. Also, v2.3.1 should be fine (we want higher than 2.2).
What was the complete command you tried that caused the error in your first message?
Also, the additional dependencies are weird to me. If you said it worked with v2.2, maybe there were new dependencies added in v2.3. Thank you for reporting this; I'll try to reproduce.
Hi @hermancollin, I think it defaults to installing v2.3 with the requirements file, which leads to the error I mentioned above when running the CLI command. Something in v2.3 might be different and might need different dependencies. Reverting to v2.2 helped me run everything smoothly. Happy to test v2.3 if you have an update on it.
Regarding the unmyelinated model that this repo provides: do you know at what pixel resolution the model was trained? Also, do you have an updated version of this model which performs better? I remember you mentioned something along these lines in our earlier discussion. Thank you.
@Eddymorphling sorry for the delayed response. I am trying to wrap up a manuscript for next week. I'll have more time to help you after - hope this is not too urgent on your side.
> Regarding the unmyelinated model that this repo provides: do you know at what pixel resolution the model was trained? Also, do you have an updated version of this model which performs better? I remember you mentioned something along these lines in our earlier discussion. Thank you.
The model currently uploaded (on axondeepseg/model_seg_unmyelinated_tem) was trained on data from the SickKids Foundation. The exact pixel size is 0.00481 um/px. I don't know how close this is to your data.
Yes, there is a better model. The one that was uploaded on October 12, 2023 is one of 5 models trained on this data. For more information, see https://github.com/axondeepseg/model_seg_unmyelinated_tem/issues/1. I'm going to upload the rest this afternoon so that you can try it and hopefully give us some feedback.
There is an additional model that I am working on with data from Stanford. I expect this one will work even better, but I can't upload it for now because it is still a WIP.
@Eddymorphling I uploaded the full model. Keep us updated! https://github.com/axondeepseg/model_seg_unmyelinated_tem/releases/tag/v1.1.0
Thank you! You are the best!
Hey @hermancollin, downloads went well, but I get an error when running it with the CLI. Does `nn_axondeepseg.py` need to be updated to include the folds in the new model? Here is the error:
```
2024-02-29 16:25:33.563 | INFO | __main__:main:71 - A single model was found: models/model_seg_unmyelinated_sickkids_tem_best. It will be used by default.
2024-02-29 16:25:33.564 | INFO | __main__:main:91 - Running inference on device: cpu
Traceback (most recent call last):
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 123, in <module>
    main()
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 93, in main
    predictor.initialize_from_trained_model_folder(path_model, use_folds=None)
  File "/home/ivm/conda/envs/nn-axondeepseg/lib/python3.10/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 96, in initialize_from_trained_model_folder
    configuration_manager = plans_manager.get_configuration(configuration_name)
UnboundLocalError: local variable 'configuration_name' referenced before assignment
```
@hermancollin Sorry to bother you again; could you please help me with the above issue? Thank you.
Hi @Eddymorphling - Armand is working towards a paper deadline that's due in the next few days (I can't recall), so it's likely he'll only be able to revisit this early next week. I'll try and take a look at it ASAP to see if I can reproduce your error and maybe get an idea of how it can be resolved on your end.
Thank you for reaching out! That would be helpful.
@Eddymorphling I just ran the install today and it segmented fine; I think maybe you downloaded the repo before Armand pinned nnunet to version 2.2: https://github.com/axondeepseg/nn-axondeepseg/commit/1c369ff6ab22491e525a24de0df78668064bbb07
Can you verify this? Do a `pip freeze` and check the version of `nnunetv2`; if it's not version 2.2, then do `pip install nnunetv2==2.2` and try the segmentation again.
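If you'd rather check the pin from Python than eyeball the `pip freeze` output, here is a small sketch (the `check_pin` helper is my own name, not part of this repo):

```python
from importlib.metadata import PackageNotFoundError, version


def check_pin(pkg: str, wanted: str) -> bool:
    """Return True only if `pkg` is installed at exactly version `wanted`."""
    try:
        return version(pkg) == wanted
    except PackageNotFoundError:
        # Package is not installed at all.
        return False
```

For example, `check_pin("nnunetv2", "2.2")` should return `True` in a correctly pinned environment.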
Scrolling up, I see that you should have already gotten version 2.2 installed: https://github.com/axondeepseg/nn-axondeepseg/issues/8#issuecomment-1964879208, sorry for missing that!
Could you please still do a `pip freeze` and post the output here? Here's mine:
Here's the full log of my successful test using a fresh install of this repo, and also using this image in an input folder:
I downloaded the UM model.
Here's the output segmentation:
I'm going to test with the latest model, https://github.com/axondeepseg/model_seg_unmyelinated_tem/releases/tag/v1.1.0, in case that one wasn't automatically downloaded by the CLI, brb
@mathieuboudreau Thanks for testing this!
I was able to have it segment images using the UM model (v1.0) without any issues, but it does not work well with the latest model (SickKids foundation model, v1.1.0). All I did was download the model manually, unzip it, and assign the path to the model in the CLI using `--path-model`. That is when I end up with the error above. I am also running on `nnunetv2==2.2` currently. Here is my `pip freeze` just in case:
```
acvl-utils==0.2
batchgenerators==0.25
blessed==1.20.0
certifi==2024.2.2
charset-normalizer==3.3.2
connected-components-3d==3.12.4
contourpy==1.2.0
cycler==0.12.1
dicom2nifti==2.4.10
dynamic-network-architectures==0.3.1
filelock==3.13.1
fonttools==4.49.0
fsspec==2024.2.0
future==1.0.0
graphviz==0.20.1
idna==3.6
imagecodecs==2024.1.1
imageio==2.34.0
Jinja2==3.1.3
joblib==1.3.2
kiwisolver==1.4.5
lazy_loader==0.3
linecache2==1.0.0
loguru==0.7.2
MarkupSafe==2.1.5
matplotlib==3.8.3
mpmath==1.3.0
networkx==3.2.1
nibabel==5.2.1
nnunetv2==2.2
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==12.535.133
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
opencv-python==4.9.0.80
packaging==23.2
pandas==2.2.1
pillow==10.2.0
pydicom==2.4.4
pyparsing==3.1.1
python-dateutil==2.8.2
python-gdcm==3.0.23
pytz==2024.1
PyYAML==6.0.1
requests==2.31.0
scikit-image==0.22.0
scikit-learn==1.4.1.post1
scipy==1.12.0
seaborn==0.13.2
SimpleITK==2.3.1
six==1.16.0
sympy==1.12
threadpoolctl==3.3.0
tifffile==2024.2.12
torch==2.2.1
tqdm==4.66.2
traceback2==1.4.0
triton==2.2.0
typing_extensions==4.10.0
tzdata==2024.1
unittest2==1.1.0
urllib3==2.2.1
wcwidth==0.2.13
yacs==0.1.8
```
@Eddymorphling I found the issue, and a temporary fix. For a more permanent solution, I'd rather wait for @hermancollin.
The problem stems from the fact that in the fold directories of the SickKids foundation model (v1.1.0), the checkpoint files are named `checkpoint_best.pth`. However, because our nnunet call doesn't pass a value for the `checkpoint_name` argument (i.e. `checkpoint_name="checkpoint_best.pth"`), nnunet falls back on its default, which is `checkpoint_name="checkpoint_final.pth"`. That file isn't in the fold folders for this model, which snowballs and later results in the error.
So a quick fix that worked for me was to rename the checkpoint file in each fold folder to `checkpoint_final.pth`; that resolved the issue.
Let me know if it works for you!
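The rename can be scripted; here is a sketch (the model path in the usage comment is an example taken from the log above, so adjust it to wherever you unzipped the model):

```shell
# Rename checkpoint_best.pth -> checkpoint_final.pth in every fold_* folder
# of a downloaded nnunet model directory.
rename_checkpoints() {
  local dir="$1"
  for f in "$dir"/fold_*/checkpoint_best.pth; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    mv "$f" "${f%checkpoint_best.pth}checkpoint_final.pth"
  done
}

# Usage:
# rename_checkpoints models/model_seg_unmyelinated_sickkids_tem_best
```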
Ah I missed that vital piece of info. It works now, thank you! This has been helpful.
v1.1 of the unmyelinated model performs much better than v1.0. Do you have some info (like sample type, TEM/SEM, etc.) on the training images used for the SickKids model? Armand had already shared the info on the scaling of the input images. From what I understand, there is also a "Stanford" model in the works which performs even better? Thank you for all your efforts on this!
Hi @Eddymorphling. Happy to hear you were able to make it work. Thanks @mathieuboudreau for looking into this - I'll make a PR to catch this error in the future. In more recent scripts, we have a CLI argument so the user can choose between best or final checkpoints (although I only released the best checkpoints for the SickKids model, to halve the release filesize).
> v1.1 of the unmyelinated model performs much better than v1.0. Do you have some info (like sample type, TEM/SEM, etc.) on the training images used for the SickKids model? Armand had already shared the info on the scaling of the input images.
The modality for the SickKids model is TEM. The team it was initially developed for studies myelination in mouse models. They had multiple samples per genotype per timepoint, which I think the training data partially covered. What about your images? I know they are TEM as well.
> From what I understand, there is also a "Stanford" model in the works which performs even better? Thank you for all your efforts on this!
Yes, this one is still a WIP. It is also being trained on TEM images but their images look quite different and have a very high resolution. It might perform better on your data, but your mileage may vary.
I would be interested in knowing more about your project. From what I gathered, you are interested in segmenting myelinated + unmyelinated/remyelinated axons as well. If you were willing to collaborate maybe we could help you get better performance by training or fine-tuning the models.
(please note I fixed the problem with best checkpoints + updated the download script for TEM unmyelinated v1.1 in f33f43b)
Hi @hermancollin, sorry to go back to this. I had to recreate my conda env recently, so I had to reinstall nn-axondeepseg. I set everything up as described on the main page but end up with the same error as before. I can confirm that `git clone` pulled the latest version of all files, as in the PR mentioned in f33f43b.
```
/home/ivm/.local/lib/python3.9/site-packages/jupyter_client/__init__.py:23: UserWarning: Could not import submodules
  warnings.warn("Could not import submodules")
2024-04-17 10:08:31.531 | INFO | __main__:main:73 - A single model was found: models/model_seg_unmyelinated_sickkids_tem_best. It will be used by default.
2024-04-17 10:08:31.536 | INFO | __main__:main:93 - Running inference on device: cuda:0
Traceback (most recent call last):
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 130, in <module>
    main()
  File "/home/ivm/nn-axondeepseg/nn_axondeepseg.py", line 96, in main
    predictor.initialize_from_trained_model_folder(
  File "/home/ivm/conda/envs/nn-axondeepseg_miniforge/lib/python3.9/site-packages/nnunetv2/inference/predict_from_raw_data.py", line 96, in initialize_from_trained_model_folder
    configuration_manager = plans_manager.get_configuration(configuration_name)
UnboundLocalError: local variable 'configuration_name' referenced before assignment
```
I tried running with this CLI command: `python nn_axondeepseg.py --seg-type UM --path-out $output_folder --path-dataset $input_folder --use-gpu`
EDIT: Here is some additional output from the logs
```
use_folds is None, attempting to auto detect available folds
found the following folds: []
```
Hi @Eddymorphling. It seems the script cannot find the model checkpoints. Can you find the `model_seg_unmyelinated_sickkids_tem_best` folder and tell me what is inside?
@hermancollin Thanks for reaching out! Here is a screenshot of the folder
@Eddymorphling ahhh, I think I see the problem. Does it work if you add the `--use-best` option? e.g.

```
python nn_axondeepseg.py --seg-type UM --path-out [...] --path-dataset [...] --use-gpu --use-best
```

That's an important detail. Thank you for reporting this problem. I'll try to make the script more automated, but for now this argument is required if you only have the `checkpoint_best.pth` models. Without it, nnunet looks for models named `checkpoint_final.pth`.
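For intuition, here is an illustrative sketch (not nnunet's actual code) of why the log reports `found the following folds: []` when only best checkpoints are present: fold auto-detection only counts `fold_*` directories that contain the expected checkpoint file.

```python
from pathlib import Path


def detect_folds(model_dir, checkpoint_name="checkpoint_final.pth"):
    """Return fold numbers whose fold_* directory contains `checkpoint_name`."""
    return sorted(
        int(p.name.split("_")[1])
        for p in Path(model_dir).glob("fold_*")
        if (p / checkpoint_name).is_file()
    )
```

With only `checkpoint_best.pth` on disk, the default `checkpoint_name` matches nothing, hence the empty fold list.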
Ah great, that did it! Thanks again.
Just another small question: predictions are currently saved in RGB format. What should I tweak in `nn_axondeepseg.py` to save them as simple binarized 8-bit files?
@Eddymorphling are they? I'm surprised; on my side, the model predicts grayscale masks, so I'm not sure why you get this behavior. In any case, this is the function you would need to modify. Change L53 to

```python
img = cv2.imread(str(pred), cv2.IMREAD_GRAYSCALE)
```

I reckon this will be enough.
@Eddymorphling actually you were right. This is now fixed on the latest version.
@hermancollin Hi! Thank you for working on this. I updated my scripts and everything works like a charm now.
I also saw the new Stanford model uploaded and tested it on my images for segmenting unmyelinated axons, and it works really nicely. May I ask what the pixel scaling was for the original training images used to generate the Stanford TEM model? I just want to make sure that my images are rescaled to match the training dataset.
Hi @Eddymorphling! It has been a while! Since our last exchange, the axondeepseg software was updated and now supports these models (so this `nn-axondeepseg` repository is no longer up to date). I would highly suggest you download the latest version of AxonDeepSeg and try the models there. You will still be able to use the Stanford model, but it will be more stable, and you will additionally be able to run morphometrics on the unmyelinated axon masks.
As for the pixel size for the Stanford model, it is 4.93 nm/px isotropic.
Thanks @hermancollin. I did upgrade to the latest version of ADS that includes the new generalist model. TBH it did not work out quite well for me when segmenting myelinated axons. In the screenshot below, image 2 was segmented using the old TEM model with the parameter `-s 0.10`, and image 3 is with the new generalist model. I prefer using the old TEM model, but I purged my old ADS env (ADS v4.1) and am not sure how to install it again. I think having the pixel scaling in the CLI makes a big difference in the inference for my datasets, and I understand that this is no longer needed with the new generalist model.
Hi @hermancollin, as per our previous discussion, I am testing nn-axondeepseg. The setup went well; I just had to also install a pip package manually (`wcwidth`). Now when I run the CLI segmentation command, I come across the following error. Any advice on how to fix this? Some other info: my fresh conda env runs on Python 3.10, input files are in `.png` format, and cuda/pytorch sees the GPU in my conda environment.