Closed sjbt007 closed 2 years ago
Hi, I faced a problem when I used ConvDip in tutorial 2 on the sample data. Could you give me some advice or tips on how to fix it?
The error says that some or all sensor positions of your MEG system (e.g., MEG 0113) are the same. This makes the code throw an error in the _find_topomap_coords function, which makes sense because correct sensor positions are a precondition for ConvDip.
You can check whether you have correct sensor positions in your mne objects by plotting them.
If your positions are indeed faulty and you don't have the correct positions anywhere, you can try out standard montages using predefined sensor locations.
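For example, assuming your data is in an mne.Epochs (or Raw) object called epochs (a placeholder name), the check and the fallback montage could look like this; 'standard_1020' is just one possible template:

import mne

# Inspect the sensor positions currently stored with the data
epochs.plot_sensors(kind='topomap', show_names=True)

# If the EEG positions are missing or all identical, fall back to a template montage
montage = mne.channels.make_standard_montage('standard_1020')
epochs.set_montage(montage)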
Also, I am very interested in your research and paper. I think it is meaningful and creative. I like it and want to research it more deeply.
Thank you very much, I appreciate your kind words!
But I didn't find the dataset in this repository. Could you release the code for how to simulate the source dataset and target dataset in your paper ConvDip?
The dataset is not included in the esinet repository since it can simply be recreated using the esinet.Simulation object. These would be the correct settings for ConvDip:
from esinet import Simulation

# Simulation settings used for ConvDip
settings = dict(
    duration_of_trial=0.0,
    extents=(21, 58),
    number_of_sources=(1, 5),
    amplitudes=(1, 10),
    target_snr=4.5,
)
n_samples = 100000

# fwd and info are the mne.Forward and mne.Info objects of your setup (see below)
sim = Simulation(fwd, info, settings=settings)
sim.simulate(n_samples=n_samples)
And the test code for ConvDip, cMEM, eLORETA and LCMV? I would be very grateful if you could.

I don't have organized code for the cMEM inverse solutions since that required exporting the data from mne-python to Brainstorm (Matlab), which was quite fiddly. For the others, I used the following code:
from esinet.util import mne_inverse
from esinet import Simulation

# Simulate data (same settings as above)
settings = dict(
    duration_of_trial=0.0,
    extents=(21, 58),
    number_of_sources=(1, 5),
    amplitudes=(1, 10),
    target_snr=4.5,
)
n_samples = 100000
sim = Simulation(fwd, info, settings=settings)
sim.simulate(n_samples=n_samples)
# Calc eLORETA inverse solutions
simulated_epochs = sim.eeg_data
sources_eloreta = []
method = "eLORETA" # can also be "beamformer", "MNE", "dSPM"
for epochs in simulated_epochs:
    sources = mne_inverse(fwd, epochs, method=method, snr=3.0,
                          baseline=(None, None), rank='info', weight_norm=None,
                          reduce_rank=False, inversion='matrix', pick_ori=None,
                          reg=0.05, regularize=False, verbose=0)
    sources_eloreta.append(sources)
# Then you can plot it like so:
idx = 0
sources_eloreta[idx].plot()
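For the LCMV results, the same loop can be reused with the beamformer option mentioned in the comment above (a sketch under the assumption that mne_inverse takes the same keyword arguments for that method):

# Calc LCMV beamformer inverse solutions with otherwise identical settings
sources_lcmv = []
for epochs in simulated_epochs:
    sources = mne_inverse(fwd, epochs, method="beamformer", snr=3.0,
                          baseline=(None, None), rank='info', weight_norm=None,
                          reduce_rank=False, inversion='matrix', pick_ori=None,
                          reg=0.05, regularize=False, verbose=0)
    sources_lcmv.append(sources)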
Hope this helps! By the way, ConvDip is not our most recent work on this topic; you can check out our preprint!
Hi Hecker, first of all, thank you very much for your patient and detailed reply, which is very useful and helpful for me. I am sorry to disturb you, but there are still some questions.

First, could you tell me more about the details of the 'fwd' and 'info' you used for ConvDip? I see there are many datasets and corresponding 'fwd.fif' files in the MNE datasets. Could you let me know which two forward models and which info you used for the training dataset ('GM' model) and the test dataset ('AGM' model) in your paper?

What's more, I think your idea of how to define the positive and negative samples for the AUC is very interesting and meaningful. I tried to find the related code, but it doesn't seem to be in this repository. Could you release the code for how to evaluate ConvDip or other models with the AUC combined from the close and far AUC?

Finally, it is creative to use two different generative models to avoid the inverse crime. In fact, as far as I know, many researchers don't consider this question when they work on ESI, so I am interested in it. In the paper, you used K-nearest-neighbor interpolation and normalization to transform a source vector from the AGM to a source vector in the GM. Could you share the code for this part of ConvDip? I'm curious how to handle and evaluate results with different numbers of dipoles for the same neural network.

I really appreciate your patience and eager help, and I am happy to follow your recent work on ESI. If I can, I will cite it as an important reference. Looking forward to your reply.
Hi Hecker, first of all, thank you very much for your patient and detailed reply, which is very useful and helpful for me. I am sorry to disturb you, but there are still some questions.

First, could you tell me more about the details of the 'fwd' and 'info' you used for ConvDip? I see there are many datasets and corresponding 'fwd.fif' files in the MNE datasets. Could you let me know which two forward models and which info you used for the training dataset ('GM' model) and the test dataset ('AGM' model) in your paper?
You produce the 'GM' and 'AGM' like this:
from esinet.forward import create_forward_model, get_info
from esinet.util import unpack_fwd
info = get_info()
# GM
fwd_gm = create_forward_model(sampling='ico4')
# AGM
fwd_agm = create_forward_model(sampling='oct6')
The actual mne.Info object that was used in the paper contained our custom EEG layout from the final source analysis on real data shown in Appendix C.
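If you want to build something similar for your own layout, a plain mne.Info can be created from a channel list and sampling rate (a generic MNE sketch; the channel names and sampling frequency below are placeholders, not the ones from the paper):

import mne

# Build an Info object for a custom EEG layout (placeholder values)
ch_names = ['Fp1', 'Fp2', 'Cz', 'O1', 'O2']
info_custom = mne.create_info(ch_names, sfreq=100., ch_types='eeg')
info_custom.set_montage(mne.channels.make_standard_montage('standard_1020'))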
What's more, I think your idea of how to define the positive and negative samples for the AUC is very interesting and meaningful. I tried to find the related code, but it doesn't seem to be in this repository. Could you release the code for how to evaluate ConvDip or other models with the AUC combined from the close and far AUC?
Yes. This is a full working example:
from esinet.forward import create_forward_model, get_info
from esinet import Simulation
from esinet import Net
from esinet.evaluate import eval_auc
from esinet.util import unpack_fwd
# Create Forward model
info = get_info()
fwd = create_forward_model(sampling='ico1')
pos = unpack_fwd(fwd)[2]
# Simulate data
sim = Simulation(fwd, info).simulate(20)
# train neural network
net = Net(fwd, n_dense_layers=1, n_lstm_layers=0, n_dense_units=1).fit(sim)
# Make predictions of the training data
stcs = net.predict(sim)
# Get true source vector of first sample and first time point
sample_idx = 0
time_idx = 0
y_true = sim.source_data[sample_idx].data[:, time_idx]
y_est = stcs[sample_idx].data[:, time_idx]
# Important code is here:
# Calculate the auc_close and auc_far between the two source vectors:
auc_close, auc_far = eval_auc(y_true, y_est, pos, n_redraw=25, epsilon=0.25,
                              plot_me=False)
print(f'AUC_close: {auc_close}, AUC_far: {auc_far}')
Note that the n_redraw parameter needs to be set higher (around 10^3 or 10^4) to get consistent results.
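For instance, to get a stable estimate over the whole simulation you could average across all samples with a larger n_redraw (just a sketch building on the variables defined above):

import numpy as np

# Average close/far AUC over all simulated samples with more redraws
aucs = []
for sample_idx, stc in enumerate(stcs):
    y_true = sim.source_data[sample_idx].data[:, time_idx]
    y_est = stc.data[:, time_idx]
    aucs.append(eval_auc(y_true, y_est, pos, n_redraw=1000, epsilon=0.25,
                         plot_me=False))
auc_close_mean, auc_far_mean = np.mean(aucs, axis=0)
print(f'mean AUC_close: {auc_close_mean:.3f}, mean AUC_far: {auc_far_mean:.3f}')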
Finally, it is creative to use two different generative models to avoid the inverse crime. In fact, as far as I know, many researchers don't consider this question when they work on ESI, so I am interested in it. In the paper, you used K-nearest-neighbor interpolation and normalization to transform a source vector from the AGM to a source vector in the GM. Could you share the code for this part of ConvDip? I'm curious how to handle and evaluate results with different numbers of dipoles for the same neural network.
I can't find the interpolation function right now; it may be on another computer. I only ever used it once, for this publication, and never went back to it. What I can tell you is that you need the following snippet:
from esinet.forward import create_forward_model, get_info
from esinet.util import unpack_fwd
from scipy.spatial.distance import cdist
# Create Forward model
info = get_info()
fwd_gm = create_forward_model(sampling='ico4')
fwd_agm = create_forward_model(sampling='oct6')
# Get the dipole positions
pos_gm = unpack_fwd(fwd_gm)[2]
pos_agm = unpack_fwd(fwd_agm)[2]
# Get the distance matrices between each pair of dipoles
distance_matrix_gm = cdist(pos_gm, pos_gm)
distance_matrix_agm = cdist(pos_agm, pos_agm)
From there it should be easy to find K nearest neighbors of the source space you want to convert.
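Roughly, the interpolation could look like this (my reconstruction from memory, not the exact function used in the paper; k=5 and the final rescaling are just one possible choice):

import numpy as np
from scipy.spatial.distance import cdist

def interpolate_agm_to_gm(source_agm, pos_agm, pos_gm, k=5):
    # Cross-space distances: for every GM dipole, the distance to every AGM dipole
    distances = cdist(pos_gm, pos_agm)
    # Indices of the k nearest AGM dipoles for each GM dipole
    neighbor_idc = np.argsort(distances, axis=1)[:, :k]
    # Average the AGM source values of those neighbors
    source_gm = source_agm[neighbor_idc].mean(axis=1)
    # One possible normalization: rescale to the original maximum amplitude
    source_gm *= np.abs(source_agm).max() / (np.abs(source_gm).max() + 1e-12)
    return source_gm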
I really appreciate your patience and eager help, and I am happy to follow your recent work on ESI. If I can, I will cite it as an important reference. Looking forward to your reply.
Thank you, I am happy to help