mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python
https://mne.tools
BSD 3-Clause "New" or "Revised" License

Edge artefacts in ReceptiveField #4253

Closed: nbara closed this issue 7 years ago

nbara commented 7 years ago

I chatted briefly with @choldgraf about this today: I've been giving the new ReceptiveField code a try on some old data, and I'm getting strange artefacts in the reconstructed signal.

Here I've been using the stimulus reconstruction approach (backward model) to predict my stimulus from the EEG. The data are 10s epochs, and I have about 50-70 of those.

Here's what the reconstruction looks like (all the reconstructed epochs are concatenated inside the predict method). X axis is time samples @ 256 Hz.

[figures 1-3: reconstructed signal, showing artefacts at epoch boundaries]

The artefact is located in the first len(time_lags) samples of each epoch, so I assume it has to do with how the delays are handled in predict.
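For intuition, here is a minimal sketch of why time-lagging leaves invalid samples at epoch edges (an illustration of the mechanism, not MNE's actual code; `delay_signal` is a hypothetical helper):

```python
import numpy as np

def delay_signal(x, lags):
    """Stack time-lagged copies of a 1-D signal; missing edges are NaN."""
    out = np.full((len(x), len(lags)), np.nan)
    for j, lag in enumerate(lags):
        if lag >= 0:
            # A positively lagged copy has no data for its first `lag` samples.
            out[lag:, j] = x[:len(x) - lag if lag else None]
        else:
            out[:lag, j] = x[-lag:]
    return out

x = np.arange(6, dtype=float)
delayed = delay_signal(x, lags=[0, 1, 2])
# The first max(lags) rows mix real data with NaNs, so any prediction
# built from them at the start of an epoch is unreliable.
print(np.isnan(delayed).any(axis=1))
```

With lags `[0, 1, 2]`, exactly the first two time samples contain missing values, matching the observation that the artefact spans the first `len(time_lags)` samples.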

larsoner commented 7 years ago

Does this also happen when you pass Ridge as your solver? That will tell us whether @choldgraf or I am to blame :)

choldgraf commented 7 years ago

good point @Eric89GXL - for context, @nbara, these two keywords will run very different pipelines.

nbara commented 7 years ago

Actually I always pass Ridge as my solver. Didn't realize it was different from just giving the alpha value.

rf = ReceptiveField(lmin, lmax, fs, feature_names=None,
                    estimator=Ridge(alphas[ii]),
                    scoring='corrcoef',
                    fit_intercept=False)

EDIT: passing a float seems to work, actually. At least there are no visible artefacts at the edges.

larsoner commented 7 years ago

Can you try just passing alpha?

nbara commented 7 years ago

Passing a float seems to work, actually. At least there are no visible artefacts at the edges.

larsoner commented 7 years ago

Is it easy to share some data to reproduce?

Do you have time to look @choldgraf?

nbara commented 7 years ago

Sure! Here are the files:

https://www.dropbox.com/s/q1tdh5ojaxof7kt/test-epo.fif?dl=0 https://www.dropbox.com/s/5dzdp0bvpgv6ink/stimulus.npy?dl=0

The stimulus is already reshaped to be the same size as the epoched data. Let me know when you've downloaded the files so that I can remove them from my dropbox.

choldgraf commented 7 years ago

This is what I'm seeing in the stim data, is that correct?

(this is not predicted values, just raw data)

[image: raw stimulus data plotted over time]

Can you share the code you use to generate the plots above?

choldgraf commented 7 years ago

Some quick thoughts:

choldgraf commented 7 years ago

one way to get around this right now is simply to reshape the output of predict so that it's of shape (n_samples[, n_epochs], n_outputs). Then you can apply rf.keep_samples_ to the first dimension (which will be time). This will remove timepoints that have missing values.
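That workaround might look roughly like this (a sketch with fake data; `pred` and `keep_samples` stand in for the real output of `rf.predict(...)` and the `rf.keep_samples_` mask):

```python
import numpy as np

n_samples, n_epochs, n_outputs = 100, 5, 1
# Stand-in for the flat, concatenated output of predict:
pred = np.random.randn(n_samples * n_epochs, n_outputs)

# Reshape so the first axis is time within each epoch:
pred = pred.reshape(n_samples, n_epochs, n_outputs)

# Stand-in for keep_samples_: a boolean mask over the time axis marking
# samples whose delayed copies lie fully inside the epoch. Here we
# pretend the first 10 samples (len(time_lags)) are invalid:
keep_samples = np.ones(n_samples, dtype=bool)
keep_samples[:10] = False

pred_valid = pred[keep_samples]
print(pred_valid.shape)  # (90, 5, 1)
```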

nbara commented 7 years ago

This is what I'm seeing in the stim data, is that correct?

Correct, they're pixel luminance values :)

one way to get around this right now is simply to reshape the output of predict so that it's of shape (n_samples[, n_epochs], n_outputs). Then you can apply rf.keep_samples_ to the first dimension (which will be time). This will remove timepoints that have missing values.

Yeah, this is what I've done (well, I've zeroed those values, but the idea is the same).

larsoner commented 7 years ago

I noticed that in the call to fit, it will delay/reshape the data only if we're not using the TimeDelayingRidge instance, but that in predict it will delay/reshape regardless. Will that be a problem?

Problem in what way? It seems to be working okay so far I think.

one way to get around this right now is to simply reshape the output of predict so that it's the of shape (n_samples[, n_epochs], n_outputs)

The alternative would be to apply the model with an assumption of zeros (this is effectively what TimeDelayingRidge does). I wonder if all the work to remove samples is more work than it's worth. We could also do something similar in fit by padding with zeros at the start and end (of each epoch).

choldgraf commented 7 years ago

Problem in what way? It seems to be working okay so far I think.

Because doesn't your estimator do the delaying internally? I was worried that we'd be delaying already delayed data.

We could also do something similar in fit by padding with zeros at the start and end (of each epoch).

We could do something like that... I guess in my mind I'm considering any of those edge datapoints as "invalid" anyway, so better to throw them away. However, I foresee this reshaping etc. being confusing to folks...

larsoner commented 7 years ago

Because doesn't your estimator do the delaying internally? I was worried that we'd be delaying already delayed data.

We'll have to check the code, I can't remember offhand. But the tests pass so I suspect it's working okay.

I guess in my mind I'm considering any of those edge datapoints as "invalid" anyway, so better to throw them away. However, I foresee this reshaping etc. being confusing to folks...

Yeah it's a decision about whether we want a "valid" or "full" type of operation (in terms of what np.convolve would do). I don't mind using "full" and letting people trim if necessary. It seems a bit simpler from a code standpoint. And it will unify the Ridge and TimeDelayingRidge code a bit.
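The "valid" vs "full" distinction referenced above, in np.convolve terms (a small sketch):

```python
import numpy as np

x = np.array([1., 2., 3., 4.])
k = np.array([1., 1.])

# "full" includes edge samples, implicitly assuming zeros outside x:
full = np.convolve(x, k, mode='full')
print(full)   # [1. 3. 5. 7. 4.]

# "valid" keeps only samples where the kernel fully overlaps x:
valid = np.convolve(x, k, mode='valid')
print(valid)  # [3. 5. 7.]
```

"Full" means every timepoint gets a prediction (with zeros assumed at the boundaries), while "valid" means trimming the edge samples entirely, which is what the current masking does.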

choldgraf commented 7 years ago

So what changes to the codebase would we need to make, then? Right now calling fit will remove edges as necessary before fitting the model, calling score will also remove edges before scoring, but calling predict will not remove any edges. An easy solution would be to add a remove_edges=True flag, but I get the feeling you'll think that's a little hacky? :)

choldgraf commented 7 years ago

although OTOH, if we add a remove_edges=True flag, then maybe we don't even need to worry about the masking from a user standpoint; it could be a hidden attribute. We could add an attribute that tells users what the length of the new time dimension will be so they can easily reshape, or something...

choldgraf commented 7 years ago

or we could add another flag to predict called vectorize=True that, if True, would reshape the predicted outputs to the sklearn-compatible 2D shape, and if False would retain the original shape

larsoner commented 7 years ago

Right now calling fit will remove edges as necessary before fitting the model, calling score will also remove edges before scoring, but calling predict will not remove any edges.

Basically no edges would be removed in any step. Zeros would be assumed outside the boundaries. I don't think this part would be too difficult.

Do you foresee problems with this approach? If not, it seems simplest and actually is what I'd expect as a user.

larsoner commented 7 years ago

... as for what predict returns, we can add a flag regardless of how we decide to treat the edges.

larsoner commented 7 years ago

Looking into this a bit now.

Because doesn't your estimator do the delaying internally? I was worried that we'd be delaying already delayed data.

@choldgraf looking into this deeper, I'm not sure what you mean because it looks like fit and predict both use self._delay_and_reshape(...). It's just that, in both cases, the call to _delay_and_reshape(...) won't really do any delaying when TimeDelayingRidge is used.

I'd like to rename keep_samples_ to valid_samples_ to mirror what np.convolve() and np.correlate() call this mode / these samples, i.e., the valid ones.

I'm also fixing a bug where, for TimeDelayingRidge with epochs, they were just concatenated. Now they are padded before concatenation to ensure that the tail end of epoch 0 does not contaminate the start of epoch 1, and so forth.
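A sketch of that padding idea (an illustration, not MNE's actual code): inserting zeros between epochs before concatenation so that lagged samples from the end of one epoch cannot leak into the start of the next.

```python
import numpy as np

def pad_and_concatenate(epochs, n_pad):
    """Append n_pad zeros to each epoch, then concatenate them."""
    pad = np.zeros(n_pad)
    return np.concatenate([np.concatenate([ep, pad]) for ep in epochs])

epochs = [np.ones(4), 2 * np.ones(4)]
out = pad_and_concatenate(epochs, n_pad=2)
print(out)  # [1. 1. 1. 1. 0. 0. 2. 2. 2. 2. 0. 0.]
```

With `n_pad` at least the maximum lag, any delayed copy that reaches past the end of one epoch sees only zeros rather than the next epoch's data.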

choldgraf commented 7 years ago

yo - sorry for slow response, there was the annual jupyter dev meeting at Berkeley this week and I got sucked into that all week long doing Binder stuff (which, by the way, is almost done: beta.mybinder.org).

On the delay and reshape, I must not have noticed the check for TimeDelayingRidge. Just found that...so LGTM.

For attribute renaming that sounds good with me, and for the fix that also sounds good.

larsoner commented 7 years ago

@nbara in the meantime can you make a minimal example showing that it works for float arguments but not when passing Ridge explicitly? I tried a little bit to replicate but got edge artifacts in both cases.

nbara commented 7 years ago

Hi Eric, I'll try to do that as soon as possible (probably next week, as I'm completely swamped atm)

nbara commented 7 years ago

@Eric89GXL Here is a gist showing the edge artefacts with Ridge(), and no visible artefacts when passing a float:

https://gist.github.com/nbara/1bec801ec77790ce7d68575d4ff57e9d

Data is attached. edge.zip

Let me know if there's anything else I can do.

larsoner commented 7 years ago

So I've dug into your code a bit and it looks like this is actually related to the low-passing of the EEG signal. It must cause some ugly rank-deficiency somewhere. Like this:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from mne.decoding import ReceptiveField

N = 256
fs = 1.
tmin, tmax = -50, 100
reg = 0.1
rng = np.random.RandomState(0)
eeg = rng.randn(N, 1, 1)
eeg *= 100
eeg = np.fft.rfft(eeg, axis=0)
eeg[N // 4:] = 0  # rank-deficient lowpass
eeg = np.fft.irfft(eeg, axis=0)
win = np.hanning(N // 8)
win /= win.mean()
y = np.apply_along_axis(np.convolve, 0, eeg, win, mode='same')
y += rng.randn(*y.shape) * 100

n_channels = eeg.shape[2]
n_epochs = eeg.shape[1]

plt.plot(y.ravel(), 'k', lw=3)

rf1 = ReceptiveField(tmin, tmax, fs, estimator=Ridge(reg))
rf1.fit(eeg, y)
pred = rf1.predict(eeg)
plt.plot(pred.ravel())
print(np.corrcoef(y.ravel(), pred.ravel())[0, 1])

rf2 = ReceptiveField(tmin, tmax, fs, estimator=reg)
rf2.fit(eeg, y)
pred = rf2.predict(eeg)
plt.plot(pred.ravel())
print(np.corrcoef(y.ravel(), pred.ravel())[0, 1])

[screenshot, 2017-06-13: target signal with predictions from both estimators overlaid]

In any case, I'm looking into it now.