aestrivex / ielu

grids and strips electrode localization utility
GNU General Public License v3.0

Manual steps instead of pipeline? #8

Closed kingjr closed 8 years ago

kingjr commented 8 years ago

Hi @aestrivex, cc @choldgraf

Sorry if this is not the best place to discuss this.

I am trying to find a robust solution (see https://github.com/mne-tools/mne-python/issues/2829#issuecomment-178001549) to detect the locations of intracranial electrodes from MRI rather than CT. As you know, this kind of detection is much more difficult than with CT because it relies on noisy MRI signal dropout.

From what I see, your current API makes use of an automatic electrode localization.

Would it be conceivable to build a GUI with an MRI viewer so that:

  • the user indicates they will add a new grid of n x m electrodes, along with the fixed distance between neighboring electrodes
  • the user then manually clicks on a 3-view MRI at the locations of the electrodes that are clearly identifiable
  • an algorithm fits a linear function (for rigid depth electrodes) or a polynomial one (for surface ECoG) and predicts the locations of the remaining electrodes
  • the user confirms or adjusts all electrode locations?

This process could also make use of small tricks: e.g. plotting both the coregistered pre-op and post-op MRIs, as well as their difference, so as to highlight the electrode locations; incorporating the location of the cortical surfaces into the fit to bias the prediction towards particular locations; etc.

There already are some interactive MRI viewers here (nipy/nibabel#251) and there (https://github.com/AlexandreAbraham/pynax), so this enhancement would mainly require work on the online fitting and prediction of the electrode locations given specific grid dimensions and partial locations.

Of course, I'd be happy to help, but I'll need a bit of direction to interact with your code.
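The fit/predict step proposed above can be sketched with plain NumPy. This is purely illustrative (the function names and the affine model are assumptions, not part of gselu/ielu or ecoggui): given a few clicked electrodes with known grid indices, fit a mapping from grid coordinates (i, j) to 3D scanner coordinates and predict the rest. A polynomial design matrix could replace the affine one for curved surface grids.

```python
import numpy as np

def fit_grid(known):
    """Least-squares affine fit xyz ~ [i, j, 1] @ A from clicked electrodes.

    known: dict mapping (i, j) grid index -> (x, y, z) clicked position.
    Returns the 3x3 coefficient matrix A.
    """
    ij = np.array([(i, j, 1.0) for (i, j) in known])   # (n, 3) design matrix
    xyz = np.array([known[k] for k in known])          # (n, 3) targets
    A, *_ = np.linalg.lstsq(ij, xyz, rcond=None)       # (3, 3) coefficients
    return A

def predict_grid(A, n_rows, n_cols):
    """Predict positions of every electrode of an n_rows x n_cols grid."""
    ij = np.array([(i, j, 1.0) for i in range(n_rows) for j in range(n_cols)])
    return ij @ A

# Example: a flat 2x3 grid with 10 mm pitch; three electrodes were clicked.
clicked = {(0, 0): (0.0, 0.0, 0.0),
           (0, 2): (20.0, 0.0, 0.0),
           (1, 0): (0.0, 10.0, 0.0)}
A = fit_grid(clicked)
positions = predict_grid(A, 2, 3)   # all 6 electrode locations, row-major
```

The user would then confirm or nudge the six predicted points in the viewer, as the last bullet suggests.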

aestrivex commented 8 years ago

That approach is rather different from what I've done so far. Our algorithm extracts all the candidate electrode locations from a CT, possibly along with some noise, and then tries to fit them to a plane with local curvature. It doesn't need to find every electrode this way, but it expects to find most of them (at least 75%).
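A rough sketch of this kind of fit, assuming (this is not the actual gselu code) that noise points are simply flagged by their distance to a best-fit plane:

```python
import numpy as np

def plane_inliers(points, tol=5.0):
    """Fit a total-least-squares plane to (n, 3) points; return inlier mask.

    The plane normal is the singular vector with the smallest singular
    value; points farther than `tol` (mm) from the plane count as noise.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                               # direction of least variance
    dist = np.abs((pts - centroid) @ normal)      # orthogonal distance to plane
    return dist <= tol

# Example: a flat 3x3 grid at z=0 plus one noise point well above it.
grid = np.array([(10.0 * i, 10.0 * j, 0.0)
                 for i in range(3) for j in range(3)])
noise = np.array([[15.0, 15.0, 15.0]])
mask = plane_inliers(np.vstack([grid, noise]))    # grid kept, noise rejected
```

Local curvature would be handled by adding quadratic terms to the surface model rather than a strict plane; the 75% expectation is about having enough inliers for this fit to be stable.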

I've never worked with postoperative MR scans and I'm not sure how hard it is or if the same approach would work. Maybe you could provide some for testing.

Certainly it's feasible and useful to build support for this as a separate module from the CT processing. Parts of gselu/ielu would be the same: the labeling, visualization, and editing would not change.

kingjr commented 8 years ago

Ok, great; I'll try to get an example MRI soon.

it will try to fit them to a plane with local curvature. It doesn't need to find all the electrodes this way, but it expects to find most of them (at least 75%).

Does this handle heavy curvature? E.g. a grid wrapped around the fronto-polar cortices?

aestrivex commented 8 years ago

It should work with heavy curvature. Some of the cases we have tested had very heavy curvature, and the algorithm generally performed well. Often the primary problem in those cases was not the curvature of the grids but rather the registration error between the CT and the MR, deformation notwithstanding.

In cases of high curvature, and in general, the idea is that the algorithm might not get it exactly right, but it will get close enough to the true grid structure that it becomes very easy for the user to make a few corrections.

kingjr commented 8 years ago

Often the primary problem with those cases was not the curvature of the grids, but rather the registration error between the CT and the MR, deformation notwithstanding.

Ok, this pleads in favour of incorporating a manual MR localizer then.

Also, I got an example of a patient's pre- and post-op MRIs, together with the electrode locations manually input by the technician.

I'm reluctant to share it on Github at this stage, but I can send it to you privately.

aestrivex commented 8 years ago

You can send it to rlaplant@nmr.mgh.harvard.edu over Martinos Filedrop https://gate.nmr.mgh.harvard.edu/filedrop2/

kingjr commented 8 years ago

I started off clean to solve my MR-specific issue in a dedicated repo (https://github.com/kingjr/ecoggui). I kept the viewing, interaction, and fitting functions separate, and used the scikit-learn-inspired design for fit and predict, so it should be possible to eventually combine these repos.

aestrivex commented 8 years ago

Sounds good.

I looked at the data you sent. It would work in gselu if you input the electrodes by hand.

I had no idea how to do the initial extraction and didn't have a lot of time to work on it, so I'm curious to see what you come up with.

kingjr commented 8 years ago

I looked at the data you sent. It would work in gselu if you input the electrodes by hand.

But I guess that's the whole challenge, no?

kingjr commented 8 years ago

I had no idea of how to do the initial extraction and didn't have a lot of time to try to work on it so I'm curious to see what you come up with.

There are potential ways of getting there via a cluster comparison of the pre- and post-op MRIs, but I tried playing with it a bit and the data is way too dirty. I think the interactive interface is the way to go.
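For reference, the cluster-comparison idea could look like the following sketch: subtract the coregistered pre-op volume from the post-op one, threshold the absolute difference, and label connected components as candidate electrode clusters. This is illustrative only, on a toy volume; real MR dropout data is far noisier, which is exactly why it fails here.

```python
import numpy as np
from scipy import ndimage

def candidate_clusters(pre, post, threshold):
    """Return center-of-mass coordinates of clusters in |post - pre|."""
    diff = np.abs(post.astype(float) - pre.astype(float))
    labels, n = ndimage.label(diff > threshold)       # connected components
    return ndimage.center_of_mass(diff, labels, list(range(1, n + 1)))

# Toy volumes: two bright 'electrode' voxels added to the post-op image.
pre = np.zeros((20, 20, 20))
post = pre.copy()
post[5, 5, 5] = post[12, 12, 12] = 1.0
centers = candidate_clusters(pre, post, threshold=0.5)
```

On clean data this yields one cluster per electrode; on real data the threshold step is where the dirtiness bites, hence the preference for manual clicks.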

aestrivex commented 8 years ago

Yes, it would be nice to eventually solve this problem, but I agree.

I see that you use a different localizer in your repo (it looks like the one from nilearn?). Gselu has a tool like this, which I built with Chaco and TraitsUI, and which you could in principle use for all the input.

But it needs some UI work. I based it loosely on the nilearn tool, but I decided that I needed more specific interaction and GUI work, so I wanted to build it with a toolkit I knew a bit better. Also, I built it for twiddling a few electrodes here and there rather than for inputting many electrodes.

I also wanted it to do some difficult and fancy things, like overlaying two partly transparent images and adjusting the opacity, but I haven't gotten around to that.
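The overlay itself is just alpha blending; the hard part is wiring it into the viewer. A minimal sketch, assuming same-shape grayscale slices (a GUI slider would drive `alpha`; here it is a plain parameter):

```python
import numpy as np

def blend(base, overlay, alpha=0.5):
    """Alpha-blend two same-shape grayscale slices (alpha weights overlay)."""
    base = np.asarray(base, dtype=float)
    overlay = np.asarray(overlay, dtype=float)
    return (1.0 - alpha) * base + alpha * overlay

# Example: mix a dim slice with a bright one, favoring the base image.
ct_slice = np.full((4, 4), 0.2)
mr_slice = np.full((4, 4), 1.0)
mixed = blend(ct_slice, mr_slice, alpha=0.25)   # 0.75*0.2 + 0.25*1.0 = 0.4
```

The blended array would then be pushed back to the Chaco image plot whenever the opacity control changes.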

It might be easier to start from there, I'm not sure. It depends on what you're comfortable with.
