andypotatohy / roast

A simulator for TES
https://www.parralab.org/roast/
GNU General Public License v3.0

Loading Roast Outputs to Brainstorm #25

Open tmedani opened 4 years ago

tmedani commented 4 years ago

Hi @andypotatohy & @MaxNentwich

Just to follow up on our previous discussions about loading the Roast data to Brainstorm.

I will try to summarize the main steps; please complete them if I'm missing any points.

Completed

In progress

Next

Combine all these points into one script that can be used as an interface between ROAST and Brainstorm.

MaxNentwich commented 4 years ago

Hi Takfarinas,

Thanks for this overview! I have some progress on the items above:

However, the plot of the LF looks similar to what I shared yesterday. The arrows are oriented away from the reference in its vicinity, but not the way they should be. One thing that stands out is the amplitude: the arrows are very small. Another thing is that they seem to be parallel to the surface. Do you have any idea why that would be the case?

roast2brainstorm.zip

andypotatohy commented 4 years ago

Hi Max,

I would check these items, as we discussed yesterday: (1) electrode coordinates and order (especially the order) from ROAST may not match Brainstorm; (2) ROAST uses a different voltage reference than Brainstorm (this one I have to read up on to better understand Brainstorm); (3) different coordinate systems used by ROAST and Brainstorm; (4) mesh nodes may not exactly match between ROAST and Brainstorm.
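For item (1), one quick sanity check is to plot the ROAST electrode coordinates labelled by their index, so any ordering mismatch against the lead-field columns shows up immediately (a rough sketch; elec_coord is a placeholder name for the N x 3 electrode coordinate matrix):

% Sketch: label each ROAST electrode with its index to spot ordering mismatches.
figure; hold on; axis equal;
scatter3(elec_coord(:,1), elec_coord(:,2), elec_coord(:,3), 30, 'filled');
for i = 1:size(elec_coord, 1)
    text(elec_coord(i,1), elec_coord(i,2), elec_coord(i,3), sprintf('  %d', i));
end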

Thanks!


tmedani commented 4 years ago

Hi Takfarinas,

Thanks for this overview! I have some progress on the items above:

However, the plot of the LF looks similar to what I shared yesterday. The arrows are oriented away from the reference in its vicinity, but not the way they should be. One thing that stands out is the amplitude: the arrows are very small. Another thing is that they seem to be parallel to the surface. Do you have any idea why that would be the case?

roast2brainstorm.zip

Hi @MaxNentwich , I'm reviewing this function and will test it again.

@andypotatohy

Hi Max, I would check these items, as we discussed yesterday: (1) electrode coordinates and order (especially the order) from ROAST may not match Brainstorm; (2) ROAST uses a different voltage reference than Brainstorm (this one I have to read up on to better understand Brainstorm); (3) different coordinate systems used by ROAST and Brainstorm; (4) mesh nodes may not exactly match between ROAST and Brainstorm. Thanks!

I'm not sure that there is any difference at this level between ROAST and Brainstorm; we are just arranging the data output from ROAST into the right format in order to read it with Brainstorm. Right now, Brainstorm is just used as a graphical interface to display the raw ROAST outputs.

The errors are somewhere else:

I will check all this and test Max's updates.

tmedani commented 4 years ago

Hi all,

The first error, I think, is that the order of the electrodes is not correct. In the following figure, the selected electrodes from the 3D locations that I extracted (red and green) are in frontal areas, whereas the highlighted LF vectors from the LF (Aall) are in temporal areas.

image

It seems that the indices of the electrodes in the final leadfield from ROAST do not follow the order that we are using for the 3D electrodes here: https://github.com/andypotatohy/roast/blob/7cd31ff315a45b875d7dc85634243b4266db314b/electrodePlacement.m#L158

So we come back to the first question: we need to extract the exact 3D locations of the electrodes from ROAST, and they should be in the same order and location as they are used in the LF; the reference electrode should also be included in its right place.

It seems that another transformation is applied somewhere in the code that changes the order of the electrodes? @andypotatohy @MaxNentwich ?
Could you check whether the positions of the final electrodes match the ones that we are using from this line: https://github.com/andypotatohy/roast/blob/7cd31ff315a45b875d7dc85634243b4266db314b/electrodePlacement.m#L158

andypotatohy commented 4 years ago

Hi Takfarinas, you're right. I remember we discussed this before. The electrode order in the ROAST leadfield follows the file elec72.loc, which is a different order than the electrode order of the 3D electrode coordinates, which follows capInfo.xls. This is due to historical reasons from when I developed ROAST. See this line: https://github.com/andypotatohy/roast/blob/7cd31ff315a45b875d7dc85634243b4266db314b/roast.m#L689 So a simple solution is to just re-order the 3D electrode coordinates following the order in elec72.loc. Thanks!
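A minimal sketch of that re-ordering, assuming the two label lists are already loaded (labels_loc in the elec72.loc / lead-field order, labels_cap in the capInfo.xls order) and elec_coord is the N x 3 coordinate matrix in capInfo order:

% Sketch: permute the 3D electrode coordinates (capInfo.xls order) into elec72.loc order.
[found, perm] = ismember(labels_loc, labels_cap);   % perm(i) = row of elec_coord matching the i-th lead-field electrode
assert(all(found), 'some lead-field electrode labels were not found in the coordinate list');
elec_coord_reordered = elec_coord(perm, :);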

tmedani commented 4 years ago

Hi Takfarinas, you're right. I remember we discussed this before. The electrode order in the ROAST leadfield follows the file elec72.loc, which is a different order than the electrode order of the 3D electrode coordinates, which follows capInfo.xls. This is due to historical reasons from when I developed ROAST. See this line

https://github.com/andypotatohy/roast/blob/7cd31ff315a45b875d7dc85634243b4266db314b/roast.m#L689

So a simple solution is to just re-order the 3D electrode coordinates following the order in elec72.loc. Thanks!

Thanks for the confirmation @andypotatohy; as you said, "this is ugly".

@MaxNentwich could you reorder the 3D electrodes following the order in the code Andy pointed to?

tmedani commented 4 years ago

Hi,

just quick update,

I think I found how the electrodes are sorted, but let me say that it was crazy to follow what is going on with the different electrode orderings!

image

You need to run the ROAST main function with this argument:

roast([], 'leadField', 'simulationTag', 'MNI152leadField')

Then you need to extract the reordering from the variable 'indInRoastCore' at line 759. I did this:

indexThatReorderLF = indInRoastCore;
indexThatReorderLF(end) = 72;   % 72 is the ref elec

Then you need to reorder the previous electrode list following these new indices:

tmp = load('../lf_electrode_coord.mat');                      % load() returns a struct
electrode_coord = tmp.electrode_coord;                        % assuming the saved variable is named electrode_coord
EegLocCorrected2 = electrode_coord(:, indexThatReorderLF);    % assuming the coordinates are stored as 3 x N

tmedani commented 4 years ago

@MaxNentwich could you please try to reproduce these results in your side and confirm my results?

MaxNentwich commented 4 years ago

Thanks Takfarinas! I will try to reproduce this and let you know

tmedani commented 4 years ago

Thanks Takfarinas! I will try to reproduce this and let you know

will be great, then we can move to the next step.

MaxNentwich commented 4 years ago

Hi Takfarinas, It looks like I am able to fix the leadfield with your reordering as well. Here is an example:

leadfield_4

However, there seem to be some cases where it doesn't look as clear. For example, when I pick electrodes on opposite hemispheres:

leadfield_6

Maybe this is just an issue with the plot or the scaling. When I zoom in, it looks like the leadfield vectors are oriented the same way, but they are very small. Is there a way to plot them at different lengths in Brainstorm? Or is it possible that there is an issue with the units?

Lastly, it looks like the leadfield vectors are pointing away from the reference. In the Brainstorm example they point towards it. The figure below is what I get when I select the average reference for the plot in Brainstorm:

leadfield_avg

Please let me know what your thoughts are on this.

tmedani commented 4 years ago

@MaxNentwich if in most cases the LF is correct then it's OK. I can't see the orientation of the vectors from these figures; it's possible that there are errors at some points.

Ideally, if you specify the reference, then the vectors should point from the reference electrode to the target; it's like the anode and the cathode, and the vectors are the direction of the currents sampled at the cortex points.

If you select the average reference, then the vectors come from the 'infinity point' to the target electrode.
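For reference, switching between these two conventions is just a subtraction; a small sketch, assuming LF is stored as nVertices x nChannels x 3 and iRef is the column index of the reference electrode:

% Sketch: re-reference the lead field.
LF_avg = LF - mean(LF, 2);          % average reference: subtract the mean over channels
LF_ref = LF - LF(:, iRef, :);       % or re-reference to a specific electrode iRef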

I will try to add a patch that increases/decreases the size of the vectors.

Is there any function in ROAST that can display the electric field or the current field on the model?

tmedani commented 4 years ago

Hi Max, attached is an improved function to display the lead field.

Remove the previous version from your Brainstorm folder and replace it with this one: https://www.dropbox.com/s/bouxad8bcgbgm1d/view_leadfields.m?dl=0

You can find it in \brainstorm3\toolbox\gui

image

You can use B and L on the keyboard to increase or decrease the size of the vectors.

andypotatohy commented 4 years ago

Is there any function in the roast that can display the electrical field or the current fields on the model?

Yes, you can call the function sliceshow() here. It will show you the field vectors in the 2D slices from the 3 orthogonal views.

MaxNentwich commented 4 years ago

Hi Takfarinas and Andy, Thanks for the functions to help with visualization!

With this I can now see that for all cases the orientation of the leadfield vectors looks reasonable:

Here, the example from last week: offset_leadfield_electrodes_3

However, the arrows point from the target to the reference electrode, if I understood this right. They point from the red to the green electrode in brainstorm. Do we need to flip them?

Another thing I noticed is that the point where the leadfield vectors seem to converge does not always lie under the electrodes, as in the images below:

offset_leadfield_electrodes offset_leadfield_electrodes_2

Is this expected?

Lastly, sometimes there are leadfield vectors that are much larger than the other ones and point in another direction. You can see some in the first image. Should these be cleaned up?

tmedani commented 4 years ago

Hi

Yes, you can call the function sliceshow() here

Thanks, @andypotatohy looking forward to testing it.

@MaxNentwich great that you were able to check all this.

Now it's clear: we have more than 98% pointing in the right direction, which is what is expected, and it's a way to quickly check the results.

Another thing I noticed is that the point where the leadfield vectors seem to converge does not always lie under the electrodes, as in the images below:

This is again weird!! This means that the electrodes are not all correctly aligned in the channel file. Is it the case for all the combinations, or does it appear only in some combinations?

However, the arrows point from the target to the reference electrode, if I understood this right. They point from the red to the green electrode in Brainstorm. Do we need to flip them?

I think this is a minor problem. I guess that the stimulation current is < 0 in this simulation, or the reference electrode is the input. @andypotatohy is that the case?

Lastly, sometimes there are leadfield vectors that are much larger than the other ones and point in another direction. You can see some in the first image. Should these be cleaned up?

Regarding the weird direction/size of some vectors, these are some singularities with errors; I think they are related to the FEM solver. @andypotatohy did you observe these values in your previous investigations?

We can remove them later by downsampling to lower mesh densities or by replacing them with interpolated values from their neighbors.
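A rough sketch of the second option, assuming node_xyz is the nNodes x 3 node coordinate matrix and lf is the nNodes x 3 field for one electrode pair (knnsearch is from the Statistics Toolbox; threshold and neighbour count are arbitrary choices):

% Sketch: replace outlier vectors with the mean of their nearest clean neighbours.
mag = sqrt(sum(lf.^2, 2));
isOut = mag > 10 * median(mag);                                % crude magnitude-based outlier flag
goodIdx = find(~isOut);
outIdx  = find(isOut);
idxNN = knnsearch(node_xyz(goodIdx, :), node_xyz(outIdx, :), 'K', 8);
for k = 1:numel(outIdx)
    lf(outIdx(k), :) = mean(lf(goodIdx(idxNN(k, :)), :), 1);   % average of the 8 nearest non-outlier nodes
end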

andypotatohy commented 4 years ago

I think this is a minor problem. I guess that the stimulation current is < 0 in this simulation, or the reference electrode is the input. @andypotatohy is that the case?

In ROAST only electrode Iz is the reference. @MaxNentwich does this (point from red to green) happen to all the bipolar configurations?

@tmedani You're right about the weird size of the vectors. I know some of them are outliers from the FEM solver. This usually happens on boundaries and in areas close to electrodes.

MaxNentwich commented 4 years ago

Another thing I noticed is that the point where the leadfield vectors seem to converge does not always lie under the electrodes, as in the images below:

This is again weird!! This means that the electrodes are not all correctly aligned in the channel file. Is it the case for all the combinations, or does it appear only in some combinations?

It happens to a couple of different combinations. Others look fine. It's always close to the electrode though, never at a completely different location. I am wondering if this is about the exact location of the electrodes? Or can it be about which vertices from the mesh we select from the cortex? Maybe on the scalp the leadfield would match the electrode position?

MaxNentwich commented 4 years ago

In ROAST only electrode Iz is the reference. @MaxNentwich does this (point from red to green) happen to all the bipolar configurations?

Yes, this is consistent with all bipolar configurations, also the ones including Iz.

tmedani commented 4 years ago

It happens to a couple of different combinations. Others look fine. It's always close to the electrode though, never at a completely different location. I am wondering if this is about the exact location of the electrodes? Or can it be about which vertices from the mesh we select from the cortex? Maybe on the scalp the leadfield would match the electrode position?

Thanks for checking. I think the 3D view can also be misleading; it depends on the angle from which you view the figure.

A better verification is to plot the LF with the scalp surface. And also, yes, the ROAST computation uses an electrode model with many nodes (1 cm radius?) and we are observing only one point.

In ROAST only electrode Iz is the reference. @MaxNentwich does this (point from red to green) happen to all the bipolar configurations?

Yes, this is consistent with all bipolar configurations, also the ones including Iz.

I think it's just related to the stimulation current being < 0, is that true? I did not check the value of the stimulation current.

andypotatohy commented 4 years ago

In ROAST only electrode Iz is the reference. @MaxNentwich does this (point from red to green) happen to all the bipolar configurations?

Yes, this is consistent with all bipolar configurations, also the ones including Iz.

I think it's just related to the stimulation current being < 0, is that true? I did not check the value of the stimulation current.

The stimulation current is 1mA. I think the definition of positive direction of current flow may be different in Brainstorm and ROAST. A classic issue in physics.

tmedani commented 4 years ago

In ROAST only electrode Iz is the reference. @MaxNentwich does this (point from red to green) happen to all the bipolar configurations? Yes, this is consistent with all bipolar configurations, also the ones including Iz. I think it's just related to the stimulation current being < 0, is that true? I did not check the value of the stimulation current.

The stimulation current is 1mA. I think the definition of positive direction of current flow may be different in Brainstorm and ROAST. A classic issue in physics.

Yes, I think this is not a big problem; we can just flip the sign of the LF matrix.
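If it really is only a sign convention, the fix is a one-liner (sketch; LF stands for whatever variable holds the re-ordered lead field):

LF = -LF;   % flip the current-direction convention before importing into Brainstorm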

tmedani commented 4 years ago

Now, we need to import the MRI, then set the fiducial points, then rotate/reshape/rescale the LF into the BST coordinate system.

Once all this is done, I will build a script that does all of this in one line.

Do you have Stefan Haufe's GitHub ID? He has already done a similar process; his feedback will be helpful.

tmedani commented 4 years ago

I will share a first version of the script that loads and sets the fiducial points asap.

stefanhaufe commented 4 years ago

Hi @tmedani , sorry for my absence. I am following the discussion as closely as I can, but I was overloaded with other work this week. I will look into my old code on Friday. From memory, I think that the lead fields Andy gave me back then were in native MRI space, so I applied the same transformation to the MRI data and the leadfield. For the Brainstorm branch of the pipeline, I used the SPM transformation into MNI space, which can be called within Brainstorm and is now a fixed part of the importing process, I believe. As part of this SPM process, the fiducials are returned and can be used to define the BST coordinate system. I did not use these fiducials for anything else, as I converted all data into a "native space", which was not based on fiducial information. Hope to be able to give some more details on Friday.

tmedani commented 4 years ago

Hi @stefanhaufe, great that you are here.

No urgency; even with our limited available time we are progressing, and I think we are very close.

In the meantime I will try to make the script that loads the MRI and asks the user to set the SCS coordinate system. Then we will have the matrix that transforms from the native MRI space to the Brainstorm space.

Then we will see what we can do.

A+

andypotatohy commented 4 years ago

Thanks @tmedani. Let me know if you need any input from me. I can only get more time to look into details after I submit the grant on Oct 12.

stefanhaufe commented 4 years ago

Hi @tmedani , I had a look at my old scripts from the NYHead paper. They are very messy and I used a lot of manual transformations which I found through trial and error. The main reason for this is that I converted everything into a "native space" (I think RAS), which is not identical to the SCS space. Nowadays I would not do this anymore.

I wanted to suggest the following procedure for the conversion. I hope it makes sense, otherwise please let me know if I am missing something. The two premises are that

  1. Every output of Roast including mesh nodes, electrode positions, and the orientation of the lead fields are stored in MRI voxel coordinates.
  2. The original T1 MRI on which Roast was applied is available in any case.

@andypotatohy , can you confirm this?

I would suggest to

  1. First load the MRI into Brainstorm with the regular import process.
  2. Run "Compute MNI transformation", which I think calls an SPM function. Although we do not need the MNI transformation, this process returns the fiducials that we need to define the SCS system. I have used this hundreds of times and think that the fiducials are more accurate than what one could possibly achieve by hand. After this process, we actually have everything that we need stored in the MRI variable, namely the locations of the fiducials, and the transformation between MRI voxel and SCS spaces.
  3. Now all coordinates returned by roast have to be converted from voxel space to SCS space using this transformation, for example by using the cs_convert function (see the sketch at the end of this comment). This would be roast_headmodel.node(:,1:3) and electrode_coord.electrode_coord, right?
  4. It seems to me that the e-field is stored as an image the size of the input T1 ( x 3 for the x, y and z current components). I was actually expecting a data structure with one current per FEM node. I guess what we will have to do is assign each node a current through a nearest neighbor scheme. This could be done for convenience already in MRI voxel space. Alternatively, in SCS space. For that, all voxel coordinates of the image could be generated through meshgrid or ndgrid, and converted to SCS as well.
  5. The current components of the lead field (4th dimension of ef_all) need to be transformed as well from MRI to SCS space. Here, only the 3x3 transformation should be used but not the translation.

Does this make sense to you?

There are some open questions, such as whether 5. will be sufficient if the MRI has anisotropic resolution. I think it is.
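A minimal MATLAB sketch of step 3, assuming sMri is the Brainstorm MRI structure after running "Compute MNI transformation" and that Brainstorm's cs_convert is called as cs_convert(sMri, src, dest, P) with P an N x 3 matrix (SCS output in meters):

% Sketch (step 3): mesh nodes and electrode positions from ROAST voxel space to Brainstorm SCS.
node_scs = cs_convert(sMri, 'voxel', 'scs', roast_headmodel.node(:, 1:3));
elec_scs = cs_convert(sMri, 'voxel', 'scs', electrode_coord.electrode_coord);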

andypotatohy commented 4 years ago

Every output of Roast including mesh nodes, electrode positions, and the orientation of the lead fields are stored in MRI voxel coordinates. The original T1 MRI on which Roast was applied is available in any case. @andypotatohy , can you confirm this?

Yes, I can confirm that: (1) ROAST outputs mesh nodes in MRI voxel space; the script @tmedani and @MaxNentwich came up with also outputs the electrode positions; (2) there is no MRI for the New York head, but for individual heads the original T1 MRI is available. If the original MRI is not in RAS, it is converted into RAS by ROAST. The lead field and mesh nodes are all in RAS orientation.

stefanhaufe commented 4 years ago

Every output of Roast including mesh nodes, electrode positions, and the orientation of the lead fields are stored in MRI voxel coordinates. The original T1 MRI on which Roast was applied is available in any case. @andypotatohy , can you confirm this?

Yes, I can confirm that: (1) ROAST outputs mesh nodes in MRI voxel space; the script @tmedani and @MaxNentwich came up with also outputs the electrode positions; (2) there is no MRI for the New York head, but for individual heads the original T1 MRI is available. If the original MRI is not in RAS, it is converted into RAS by ROAST. The lead field and mesh nodes are all in RAS orientation.

Very good. For the New York head, we can use the known transformations MRI <-> MNI and MNI <-> SCS.

tmedani commented 4 years ago

Hi all,

Sorry for the slow reaction,

I'm currently working on the script that covers the points that @stefanhaufe laid out. I have an idea of how to fulfill points 1 to 3; however, for point 4, I don't know what to do with the electric field. Do you want to load it into Brainstorm?

Regarding point 5, we need to think about how to rotate and eventually rescale the LF vectors. Any suggestions?

Thx.

andypotatohy commented 4 years ago

Regarding point 5, we need to think about how to rotate and eventually rescale the LF vectors.

Can we just apply the same transform matrix that's used on the mesh nodes to the LF vectors?

tmedani commented 4 years ago

Hi,

I was able to write a script that loads the MRI and the mesh into Brainstorm and aligns everything in the same coordinate system (SCS). It is fully automated: it creates the protocol, creates a subject, and loads the data from ROAST. The only requirements from the user are to specify the ROAST output folder with the MRI, run it, and then specify the SCS coordinates from the GUI.

Here is a quick view of the mesh + MRI:

image

Now, we need to apply the same process to the electrodes and the cortex, and of course to the LF.

@MaxNentwich would you play around with it, and try to merge this script with the previous one?

Regarding point 5, we need to think about how to rotate and eventually rescale the LF vectors.

Can we just apply the same transform matrix that's used on the mesh nodes to the LF vectors?

I'm not sure, @stefanhaufe ?

A+

stefanhaufe commented 4 years ago

Hi @tmedani , great work, this looks awesome!

About point 4, it was a mistake; of course I did not mean the e-field for a particular problem but the nodes x channels x 3 leadfield. This needs to be properly transformed too, after which the resulting e-field will automatically be in the right coordinate system.

I suggest rearranging the leadfield as a (nodes*channels) x 3 matrix and transforming all 3D vectors similarly to the coordinate vectors. Yes, the same transformation should be used for the leadfield, but with one exception: no translation should be applied. If I am not mistaken, only the 3x3 part of the affine transformation should be applied. You could either do this manually or use the cs_convert function but set the translation parameters in the mri structure temporarily to zero.
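A sketch of that, assuming LF is nodes x channels x 3 and sMri is the Brainstorm MRI structure; zeroing the translation in a temporary copy is the trick described above, the rest is just reshaping:

% Sketch (step 5): rotate the lead-field vectors into SCS axes without any translation.
sMriRot = sMri;
sMriRot.SCS.T = zeros(size(sMriRot.SCS.T));      % temporarily drop the translation
[nNode, nChan, ~] = size(LF);
V = reshape(LF, nNode * nChan, 3);               % stack all vectors as a (nodes*channels) x 3 matrix
V = cs_convert(sMriRot, 'voxel', 'scs', V);      % same transform as for the coordinates, minus translation
LF_scs = reshape(V, nNode, nChan, 3);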

As I mentioned, there are a few open questions that we should check. Especially whether, in case of a non-isotropic MRI, the lead field needs an additional dimension-specific scaling (in order to reverse the scaling included in the 3x3 SCS-MRI transform). I think not. I suggest using a trial and error strategy to investigate that. Since we luckily have all the data already available in Brainstorm, we can easily compute a standard leadfield (e.g. using OpenMEEG) for some of the Roast FEM mesh points. By comparing the resulting vectors we should be able to easily figure out whether we have the right transformation or not. I suggest doing one such test with a non-isotropic MRI dataset.

andypotatohy commented 4 years ago

I agree that no translation is needed for transforming the LF.

Regarding the non-isotropic MRI, I want to mention that ROAST puts coordinates into units of "mm" using the MRI resolution stored in the header (see this line). So the scaling factor is already applied for both the mesh nodes and the LF.

MaxNentwich commented 4 years ago

Now, we need to apply the same process for the electrodes and the cortex, and of course for the LF,

@MaxNentwich would you play around it, and try to merge this script with the previous one?

I definitely can test that! Can you please share your code with me once you have it ready?

tmedani commented 4 years ago

I agree that no translation is needed for transforming the LF.

Regarding the non-isotropic MRI, I want to mention that ROAST puts coordinates into units of "mm" using the MRI resolution stored in the header (see this line). So the scaling factor is already applied for both the mesh nodes and the LF.

Yes, I also think that the final mesh is at real size, so I'm not sure that the anisotropic resolution will cause problems.

@MaxNentwich here is the script; in order to use it you need to have Brainstorm, since it uses some of its functions. You also need to change the extension from .txt to .m.

bst_roast_protocol.txt

You can apply the same transformation to the cortex mesh and to the channels as the one applied to the FEM mesh, and then add them to the database.

Let us know how things are going.

We are almost at the last step.

A+

tmedani commented 3 years ago

Hello all,

@MaxNentwich were you able to test the last script? Any updates?

Let me know if you have any issues reproducing it.

I'm trying to find a method to rotate the LF vectors; I will share the results asap.

A+

MaxNentwich commented 3 years ago

Hi Takfarinas, Sorry I was out of town for some time. I just checked the script you shared and there's a small issue when I run it.

There was an error when I was trying to plot the mesh in brainstorm after running your script. I think the issue is that there are no vertices in the tess_fem_roast_0V.mat file.

tess_fem_roast_0V.zip

I think something went wrong in the call to cs_convert in line 92 of your script. 'sMri' and 'no' seem fine, but the output is empty. Are there different versions of cs_convert that I might have mixed up? I also uploaded the 'sMri' and 'no' variables. Maybe they are not what they should be?

sMri_no.zip

I also wanted to ask if I have to define the fiducials when the MriViewer comes up or if that is taken care of? Thanks, Max

tmedani commented 3 years ago

Hi Takfarinas, Sorry I was out of town for some time. I just checked the script you shared and there's a small issue when I run it.

There was an error when I was trying to plot the mesh in brainstorm after running your script. I think the issue is that there are no vertices in the tess_fem_roast_0V.mat file.

tess_fem_roast_0V.zip

Hi Max,

Thanks for your reply.

I also wanted to ask if I have to define the fiducials when the MriViewer comes up or if that is taken care of?

Yes, you need to set the fiducial points in the MRI editor and then save them, as explained on this page: https://neuroimage.usc.edu/brainstorm/Tutorials/ImportAnatomy#:~:text=this%20figure%3A%20CoordinateSystems-,Fiducial%20points,EEG%20sensors%20on%20the%20MRI

I think this is the problem that causes all the other errors.

In the MRI that you shared there are no fiducial points.

>> sMri.SCS

ans = 

  struct with fields:

    NAS: []
    LPA: []
    RPA: []
      R: []
      T: []

These points are required by the function cs_convert; otherwise there is no output.
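A small guard makes that failure explicit (sketch):

% Sketch: fail loudly if the fiducials / SCS transform have not been defined yet.
if isempty(sMri.SCS.R) || isempty(sMri.SCS.T)
    error('sMri.SCS is empty: set NAS/LPA/RPA in the MRI viewer (or run "Compute MNI transformation") before calling cs_convert.');
end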

Could you set these points in the MRI editor from the GUI and then save the result, as explained?

Thanks

MaxNentwich commented 3 years ago

Thanks for your help! Defining the fiducials was the solution. I can now fully reproduce your alignment of the headmodel to the MRI:

registered_mesh_mri

Let me know if there is anything I can help with at the current step.

tmedani commented 3 years ago

@MaxNentwich this is great that you are able to reproduce it and check it.

It would be nice if you could add the transformation of the electrodes and the cortex to this script, and then load them automatically into the Brainstorm protocol, as is the case for the head model.

In the meantime, I will work on the transformation of the LF and share it with you asap.

stefanhaufe commented 3 years ago

Dear all,

please have a look at the attached slightly modified script (lines 61-83). The manual entering of fiducials can be completely avoided by running the "Compute MNI transformation" process. The sole purpose of running this process is to obtain reasonable estimates of the fiducials automatically.

bst_roast_protocol_SH.txt

MaxNentwich commented 3 years ago

Thanks Stefan for adding this modification!

@tmedani, I can work on adding the other parts to brainstorm. I wanted to ask, can this be done with the db_add() function for the channels and db_add_surface() for the cortex? I previously just replaced the mat files in the database manually.

tmedani commented 3 years ago

Thanks @stefanhaufe, I will check it asap.

@MaxNentwich yes, you can do it for the cortex with db_add_surface, and it's also possible for the channels with a similar function; I will check that as well.

MaxNentwich commented 3 years ago

Hi all,

I managed to add the cortex and channel file to the protocol. I used SCS.R and SCS.T from the converted sMri variable to convert them to match the headmodel. It looks close, but the alignment is a little off:

cortex_alignment

I would appreciate any suggestions to improve that.

bst_roast_protocol_MN.txt

tmedani commented 3 years ago

Hi Max, it looks good from this view, and from my point of view.

Why do you think the alignment is off?

Could you remove the outer tissues from the display and keep only the GM and WM? You should have a perfect alignment.

Did you use Stefan's MNI code?

I will test all these scripts on Thursday afternoon; I'm overwhelmed with another topic these days.

tmedani commented 3 years ago

@MaxNentwich I don't know if you found the function that imports the channels into the database. I checked how to do it automatically, and I think we need to use the Brainstorm function

import_channel

For that we need to pass the data either by one of its input files or in memory (we may need to hack this function).

For the first option, I think the CSV file is easy to export from MATLAB, as in the attached example:

EEG10-10_UI_Jurak_2007.zip

image

Maybe it's the most appropriate since it is possible to add the fiducials (from the MNI transformation) and the reference electrode.

Or we can directly use db_set_channel, which passes the data via the ChannelMat structure, without intermediate files, and can add the ChannelMat data to Brainstorm.
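A rough sketch of the in-memory option, assuming the usual Brainstorm channel structure fields and a db_set_channel(iStudy, ChannelMat, ChannelReplace, ChannelAlign) signature; labels, elec_scs (N x 3, SCS, meters) and iStudy are placeholders:

% Sketch: build the channel structure in memory and register it for study iStudy.
nChan = size(elec_scs, 1);
for i = nChan:-1:1                                   % fill backwards to preallocate the struct array
    Channel(i) = struct('Name', labels{i}, 'Comment', '', 'Type', 'EEG', 'Group', '', ...
                        'Loc', elec_scs(i, :)', 'Orient', [], 'Weight', 1);
end
ChannelMat = db_template('channelmat');              % empty Brainstorm channel structure
ChannelMat.Comment = 'ROAST electrodes';
ChannelMat.Channel = Channel;
db_set_channel(iStudy, ChannelMat, 2, 0);            % replace existing channel file, no automatic alignment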

Let me know if you were able to do it, or whether I should test these two scenarios (on Thursday).

Best,

MaxNentwich commented 3 years ago

Hi Takfarinas, you are right, the alignment looks good! I thought it looked like it was shifted down, but that's not the case. Plotted on the gray and white matter it all looks fine: alignment_cortex_fem

I imported the channels with db_add(). I used the format of the structure you created in your first script and manually transformed the coordinates with the SCS matrix. alignment_channels_fem

For both I used the SCS matrix that was computed with Stefan's MNI code.

Let me know if this looks alright to you.

tmedani commented 3 years ago

Hi Max,

This looks great, I think it's correct.

Now, I assume that all this can be done automatically just by running the script, right?

It will create the protocol, create the subject, and then load and align the head mesh, cortex mesh, and the electrodes, right?

A first improvement is to change the channel names to the original names instead of ch_xx; you just have to get the original names from the Excel file already loaded by ROAST, and then apply the same reordering.

The next step is loading the LF matrix and aligning it to the SCS coordinates; I will work on that soonish.

We may also think about downsampling the cortex and the LF and interpolating the LF values to the nearest nodes, as Stefan did before.

And then, if I'm right, we have all the steps done; tell me if I missed something.

Could you please share the latest script?

Thanks :)