Closed by alexrockhill 11 months ago
To flesh out the steps a little bit, I think a reasonable priority order would be:

1. Make a copy of `examples/visualization/brain.py` (I'll call it `tumor.py` below) in the same folder.
2. Install `pynrrd` (in a terminal, `conda activate mnedev` and then `pip install pynrrd` should do it).
3. Edit `tumor.py` so that it imports `pynrrd` and uses it to read in a 3D Slicer tumor file (just to make sure the loading works).
4. Figure out how to access the vertices and faces from the loaded nrrd object.
5. Try to use `brain._renderer.mesh()` in `tumor.py` to add the tumor surface to an existing brain plot (by passing in the vertices and faces). Don't worry if it lands outside the brain, or in the wrong hemisphere, or whatever; at this point we just want to make sure we can actually add it to the plot successfully.
6. Switch the example to use an MRI from your lab that actually has the tumor in it, so that when the tumor surface mesh is overlaid on the plot it actually shows up in the right place.
The motivation for this ordering is to frontload the parts of the task that involve working on the python code, and worry about the data wrangling at the end (or maybe @alexrockhill can help out with the MRI defacing, etc).
Final note: before making this public, we'll need to address PHI concerns (make sure the MRI is defaced, etc). We'll also need to figure out what exactly is ok to show and say (e.g., are we allowed to say "here's an MRI from someone with a glioblastoma" --- describing the tumor type is possibly helpful context, but possibly disclosing too much?)
Great points @drammock , I'll run the recon now on the defaced T1 and then we can go from there. Maybe put it in mne/examples/clinical/ though.
...also along the way we'll want to make a public API `brain.add_mesh()` or so, to avoid accessing the private `._renderer` attribute in an example. But let's get it working first before worrying about that.
Thank you for all the help. I've done 1, 2, 3. It's printing the array for the tumor volume with the right dimensions. The image background is black, so there are a lot of zeros; is that going to be a problem with how the MRI is converted? I am trying to figure out how to make a 3D structure from the array to check that it looks correct.
Oh, I thought it was a surface; that sounds like a volume, so you'll have to use marching cubes. There's an example in `mne.viz.Brain.add_volume_labels`. It's a bit complicated; feel free to hop on Discord if you get stuck.
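For reference, here is a minimal sketch of the marching cubes idea on a synthetic binary volume, using scikit-image rather than the private MNE helper (the synthetic cube is a stand-in for the tumor segmentation, not the actual data from this thread):

```python
import numpy as np
from skimage import measure

# Synthetic binary "tumor" blob standing in for the segmentation array
vol = np.zeros((20, 20, 20), dtype=np.uint8)
vol[5:15, 5:15, 5:15] = 1

# level=0.5 places the isosurface between the 0 background and the 1 label;
# marching cubes returns a triangulated surface mesh of that isosurface
verts, faces, normals, values = measure.marching_cubes(vol, level=0.5)
print(verts.shape, faces.shape)  # (n_verts, 3) and (n_faces, 3)
```

The resulting `verts`/`faces` arrays are exactly the kind of input a mesh-plotting call expects.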
So it turns out there might be a bit of an issue sharing data in this case because of pending IRB approval. Given the potential issues with anonymization and PHI, we could consider faking the tumor on a healthy sample dataset. It's a bit more work, but I don't think too hard. Basically, @adelinefecker, in the example you can load in a healthy MRI, zero it out where the tumor was, and then go from there. You'll have to modify the surfaces too, which will be a bit trickier, but I think you can basically delete any vertices within the tumor boundary... a bit complicated, though...
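The "zero it out where the tumor was" step could look roughly like the sketch below. The center and radius are made up for illustration, and a random array stands in for the healthy T1 data:

```python
import numpy as np

img_data = np.random.rand(64, 64, 64)  # stand-in for a healthy T1 array
center = np.array([32, 40, 28])        # hypothetical tumor center (voxels)
radius = 8                             # hypothetical tumor radius (voxels)

# Distance of every voxel from the fake tumor center
ijk = np.indices(img_data.shape)
dist = np.sqrt(((ijk - center[:, None, None, None]) ** 2).sum(axis=0))

# Black out a spherical region to fake the tumor
img_data[dist <= radius] = 0
```

A real lesion mask drawn in 3D Slicer would of course replace the sphere here.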
@larsoner I'm hoping you may be able to help troubleshoot. I drew a 3D tumor volume in 3D Slicer and saved it as nrrd (attached here: tumor.nrrd.zip). @alexrockhill helped me out with the marching cubes line, and we verified it worked using the brainstem volume from FreeSurfer. I'm not sure why it's spread out like this; I believe the array going into marching cubes is correct. I'm going to attempt it with a larger tumor volume to see if that changes things. Thank you!
```python
import nrrd
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.surface import _marching_cubes

path = '/users/adelinefecker/work/tumor.nrrd'
data, header = nrrd.read(path, index_order='F')
verts, faces = _marching_cubes(data, [1], smooth=0)[0]
fig = mne.viz.create_3d_figure((600, 600), scene=False)
fig.mesh(*verts.T, faces, 'red')
fig.show()
```
@alexrockhill I tried the same code with a larger tumor volume: bigtumor.nrrd.zip. I wondered if there could be an issue if the volume is not 100% filled in/solid, like one black pixel inside the volume throwing it off. I was extra careful this time to make sure the tumor was filled in.
Same result...
Having a hole shouldn't matter; remember the brainstem had the fourth ventricle and it worked fine. I really have no idea why it's not working. You could try another marching cubes algorithm, e.g. https://scikit-image.org/docs/stable/auto_examples/edges/plot_marching_cubes.html, and see if it works. If so, it's definitely a bug in the MNE code and we'll have to look into that...
@alexrockhill that worked! I simply replaced

```python
verts, faces = _marching_cubes(data, [1], smooth=0)[0]
```

with

```python
verts, faces, normals, values = measure.marching_cubes(data, 0)
```

Now time to add it to the existing brain plot.
Huh, when I convert it to an image it looks off:

```python
import numpy as np
import nibabel as nib
import nrrd

data, header = nrrd.read('tumor.nrrd', index_order='F')
img = nib.MGHImage(data.astype(np.float32), np.eye(4))
nib.save(img, 'tumor.mgz')
```
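One likely contributor to the image "looking off" is the identity affine: `np.eye(4)` discards the orientation, voxel spacing, and origin that 3D Slicer stores in the nrrd header. A hedged sketch of building the affine from the header instead, assuming the header contains the `space directions` and `space origin` fields that Slicer typically writes (the values below are made up):

```python
import numpy as np

# Stand-in for the header returned by nrrd.read; real values come from the file
header = {
    'space directions': np.diag([0.5, 0.5, 1.0]),   # voxel axes in mm
    'space origin': np.array([-120.0, -110.0, -60.0]),
}

# Assemble a 4x4 voxel-to-world affine from the header fields
affine = np.eye(4)
affine[:3, :3] = np.asarray(header['space directions']).T
affine[:3, 3] = header['space origin']
print(affine)
```

Note that nrrd headers also record the coordinate space (e.g. LPS vs. RAS), so a sign flip may additionally be needed depending on the file; this sketch ignores that.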
That's great that it works above; it must be a bug in the MNE marching cubes implementation.
Well, this looks reasonable though:

```python
from nilearn.plotting import plot_anat
import numpy as np
import nibabel as nib
import nrrd

data, header = nrrd.read('tumor.nrrd', index_order='F')
img = nib.MGHImage(data.astype(np.float32), np.eye(4))
plot_anat(img, (371, 322, 118))
```
Not sure what's causing the marching cubes issues....
@alexrockhill Glad you are able to see it! I got the tumor on the same plot as the brain using `brain._renderer.mesh()`. In order to work on aligning it, I was going to use an example MRI that Slicer provides and draw a made-up tumor inside of it. What file format does the MRI need to be converted to? Does it need to be pushed to MNE to be used, or can it be on my local computer? Thank you for your help!
So the surface should be in voxels, and you need to convert it to surface RAS as in https://mne.tools/dev/auto_tutorials/forward/20_source_alignment.html. The problem is that it's not from the same MRI, so I think the first step is to regenerate one using the sample MRI as a base instead of the one that is 520 x 520 x 196. Then all you have to do is apply the voxel-to-surface-RAS transform:
```python
import os.path as op
import nibabel as nib
import mne

T1 = nib.load(op.join(data_path, 'subjects', 'sample', 'mri', 'T1.mgz'))
img = nib.MGHImage(data.astype(np.float32), T1.affine)
verts = mne.transforms.apply_trans(img.header.get_vox2ras_tkr(), verts)
```
The tkr is FreeSurfer surface RAS, which is slightly different from your standard scanner RAS. You might also have to scale the verts from m to mm by multiplying by 1000.
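The transform step above can be sketched with plain numpy (a small stand-in for `mne.transforms.apply_trans`): apply a 4x4 voxel-to-surface-RAS affine to an `(n, 3)` array of vertices. The affine below is the typical tkr affine of a 256-cubed, 1 mm conformed FreeSurfer volume, used here only for illustration; in practice it comes from `img.header.get_vox2ras_tkr()`.

```python
import numpy as np

def apply_trans(trans, pts):
    """Apply a 4x4 homogeneous transform to (n, 3) points."""
    pts = np.asarray(pts, float)
    return pts @ trans[:3, :3].T + trans[:3, 3]

# Typical vox -> surface RAS (tkr) affine for a 256^3, 1 mm conformed volume
vox2ras_tkr = np.array([[-1., 0., 0., 128.],
                        [0., 0., 1., -128.],
                        [0., -1., 0., 128.],
                        [0., 0., 0., 1.]])

verts_vox = np.array([[128., 128., 128.]])  # the center voxel
verts_ras = apply_trans(vox2ras_tkr, verts_vox)
print(verts_ras)  # the center voxel maps to the origin of tkr space
```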
@adelinefecker, would you be able to find an openly available MRI with a tumor, maybe on OpenNeuro or the Open Science Framework, and do a video that we could embed in the documentation for how to use Slicer? If you don't have time to finish, I could take over for you, but that part especially I don't think I would do nearly as well as you. If you make it an mp4, you can just put it in this issue and then we can reshoot if anything should be changed.
@alexrockhill good idea. I'll work on that this afternoon!
This looks like it's paywalled but maybe you have access? https://ieee-dataport.org/data-formats/nifti-images.
Hey maybe this one http://nist.mni.mcgill.ca/bite-brain-images-of-tumors-for-evaluation-database/?
Here is the 3D Slicer example MRI (nii.gz) and tumor volume (nrrd) for your reference, @alexrockhill. I can work with Caleb to do the FreeSurfer reconstruction on this example MRI and take us one step closer to getting them aligned. Video tutorial is coming... Thank you! tumor_example.nrrd.zip MRI_tumor_example.nii.gz.zip
Awesome, did you have a chance to do a video? Where is the mri from?
@alexrockhill the example is provided by 3D Slicer when you download it: https://www.slicer.org/wiki/SampleData. The tutorial file won't upload because it's too large, so I'll email it.
Can you try and make the video like 2 minutes or so and then convert it to mp4 so the file size is reasonable?
@alexrockhill I'll have to shorten it
I'm looking forward to trying this and following your video tutorial to do it myself, have you had a chance to get this finished @adelinefecker?
Uploaded privately to YouTube because even at low quality the file was too large: https://www.youtube.com/watch?v=aZthPcASknI @alexrockhill
Nice! Looks really cool. I thought 3D Slicer had more tools, though; that still looked very manually intensive because the threshold wasn't seeded around the tumor. Is there a way to do that? It seems like a huge time saver. Very cool tutorial though; that will go really well with the MNE tutorial. One comment: where you cut the video was pretty obvious, so if you reshoot you might want to try to make the transitions smoother.
I think this can be closed for https://github.com/mne-tools/mne-gui-addons/pull/8
The goal is to be able to visualize tumor surfaces with the pial surface, and then later to add this to the `ieeg_locate` user interface in order to accurately place intracranial microgrids, which depend on an image of the surface of the brain. We need to:

- make an example (in `example/visualization/brain.py` maybe) of how to import the `nrrd` file, get the faces and vertices, and add them to the `brain` object

This seems like a great addition and a doable project in a week. cc @adelinefecker (I made the issue this time, but in the future it would be nice if you could make something like this describing what you're going to do, so that you can get feedback and hopefully waste less time and effort).