Closed aa2782 closed 10 months ago
What happens when you run this example?
This is how I ran the code. I had a raw ANTs image. After running, I got this result:
I got a dictionary with a lot of key/value pairs similar to the image above.
Then, I wanted to see whether any of the values have non-zero voxels in them, which would mean that the algorithm performed the segmentation.
I used nibabel to do the voxel count like this, and I keep getting 0:
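For reference, a non-zero-voxel check can be sketched with plain NumPy on the image array; the dictionary key below is the one reported later in this thread, and the array is a toy stand-in for the actual output, not data from this issue:

```python
import numpy as np

# Toy stand-in for results["segmentation_image"] converted to a NumPy array.
# An all-zero array is exactly what a failed (empty) segmentation looks like.
seg_array = np.zeros((8, 8, 8))

# Count voxels that carry any label; 0 means no structure was segmented.
nonzero_voxels = int(np.count_nonzero(seg_array))
print("non-zero voxels:", nonzero_voxels)
```

If this count is 0 for the segmentation image and every probability image, the problem is in the pipeline upstream, not in how the result is being viewed.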
In the ANTsPyNet documentation, it says that after running the code I'll get a list consisting of the segmentation image and probability images for each label and foreground.
I was expecting to be able to plot the segmentation image for each subregion right away using ants.plot, but when I do, it's just a black figure.
I'm pretty new to ANTsPyNet, so if there is anything I need to install, or a special way to view the segmented images from the results I got, please let me know. Also, I ran this on both Google Colab and a local Jupyter notebook on Linux.
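A black figure and a zero voxel count are two symptoms of the same thing: every voxel in the image is zero, so there is nothing for the plotter to show. A quick sanity check of the intensity range separates "empty result" from "plotting problem"; the array below is a toy stand-in for the real segmentation image:

```python
import numpy as np

# Toy stand-in for the segmentation image's voxel array.
arr = np.zeros((8, 8, 8))

# If the min and max are both 0, any plot of this image will render black,
# regardless of viewer settings or colormap.
lo, hi = float(arr.min()), float(arr.max())
if lo == hi == 0.0:
    print("image is all zeros -- the segmentation produced no labels")
```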
I don't see where you ran the code exactly as it is shown in the example. Did you actually run the example?
Would it be easier for you to analyze if I sent you my code and the raw ants image?
To answer your question, yes, I ran the code in Google Colab. In fact, I just ran it again and the result is still the same.
I just realized where the problem was. I had reoriented my ANTs file when I loaded it. When I removed the orientation parameter, the deep_flash function worked just fine. Thank you so much for your time!!
Great. But given that it was an orientation issue, and that I see you changed the default do_preprocessing=True to do_preprocessing=False in your run above, my guess is that you never ran the example as I had requested. In the future, my suggestions would be to:
1) Run the examples exactly as written. In this case, it's a simple cut-and-paste.
2) In general, always do an initial run using the default parameters. In this case, I strongly recommend against using do_preprocessing=False.
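A minimal sketch of that advice, assuming antspy and ANTsPyNet are installed; the filename is a placeholder, and the return-dictionary key is the one reported in this thread:

```python
# Guarded import so the sketch degrades gracefully where the libraries
# are not available (e.g. a fresh Colab session before pip install).
try:
    import ants
    from antspynet.utilities import deep_flash
    have_ants = True
except ImportError:
    have_ants = False

if have_ants:
    # Load without any reorientation argument -- reorienting at load time
    # was what broke the segmentation in this thread.
    t1 = ants.image_read("t1.nii.gz")  # placeholder path

    # Default parameters, as suggested: do_preprocessing=True.
    results = deep_flash(t1, do_preprocessing=True, verbose=True)

    seg = results["segmentation_image"]
    print("non-zero voxels:", int((seg.numpy() != 0).sum()))
```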
I was trying to use the deep_flash utility from ANTsPyNet to segment the medial temporal lobe (MTL) from my T1 MRI images. I ran it using both raw and skull-stripped images, but the issue remains: for some reason, I am not getting the segmented images correctly. After running the code, I got a dictionary with a number of probability images and one segmentation image, all of which contain only zeros. I am not sure what steps I could take to get my segmented regions correctly.
If there is a particular way to obtain the segmented images from the results that I got, please let me know.