raghakot / keras-vis

Neural network visualization toolkit for keras
https://raghakot.github.io/keras-vis
MIT License

[question] electrode contribution in EEG input #218

Open · lnalborczyk opened this issue 4 years ago

lnalborczyk commented 4 years ago

Hi all,

Let me first thank the developer(s) for creating and maintaining keras-vis :clap:

I have a newbie question about the best way to visualise the most useful parts of my input data for a regression task.

I have input EEG data in the form of a 5D tensor of shape (N_examples, 94 time frames, 16 electrodes, 16 electrodes, 26 frequency bins), which I'm using to predict a 3D target of shape (N_examples, 94 time frames, 40 frequency bins). Each output is a mel-spectrogram (i.e., a spectral representation of a speech signal). I have trained the following model (a ConvLSTM) by minimising the MSE between the original and predicted spectrograms.

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv_lst_m2d_52 (ConvLSTM2D) (None, 94, 16, 16, 16)    10816     
_________________________________________________________________
batch_normalization_49 (Batc (None, 94, 16, 16, 16)    64        
_________________________________________________________________
leaky_re_lu_45 (LeakyReLU)   (None, 94, 16, 16, 16)    0         
_________________________________________________________________
dropout_36 (Dropout)         (None, 94, 16, 16, 16)    0         
_________________________________________________________________
conv_lst_m2d_53 (ConvLSTM2D) (None, 94, 16, 16, 32)    24704     
_________________________________________________________________
batch_normalization_50 (Batc (None, 94, 16, 16, 32)    128       
_________________________________________________________________
leaky_re_lu_46 (LeakyReLU)   (None, 94, 16, 16, 32)    0         
_________________________________________________________________
dropout_37 (Dropout)         (None, 94, 16, 16, 32)    0         
_________________________________________________________________
time_distributed_60 (TimeDis (None, 94, 8192)          0         
_________________________________________________________________
time_distributed_61 (TimeDis (None, 94, 40)            327720    
=================================================================
Total params: 363,432
Trainable params: 363,336
Non-trainable params: 96
_________________________________________________________________
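
For reference, a definition consistent with this summary looks roughly like the following (the (2, 2) kernel size and 'same' padding follow from the parameter counts and output shapes above; the dropout rates and optimizer are just placeholders):

from keras.models import Sequential
from keras.layers import (ConvLSTM2D, BatchNormalization, LeakyReLU,
                          Dropout, TimeDistributed, Flatten, Dense)

model = Sequential([
    # input: (94 time frames, 16 x 16 electrodes, 26 frequency bins)
    ConvLSTM2D(16, kernel_size=(2, 2), padding='same', return_sequences=True,
               input_shape=(94, 16, 16, 26)),
    BatchNormalization(),
    LeakyReLU(),
    Dropout(0.3),
    ConvLSTM2D(32, kernel_size=(2, 2), padding='same', return_sequences=True),
    BatchNormalization(),
    LeakyReLU(),
    Dropout(0.3),
    # flatten each time frame and map it to the 40 mel-spectrogram bins
    TimeDistributed(Flatten()),
    TimeDistributed(Dense(40)),
])
model.compile(optimizer='adam', loss='mse')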

From there, I would like to identify / visualise the "most useful" electrodes in my 16-by-16 grid of electrodes for (correctly) predicting my output, that is, for correctly predicting the power in a given frequency bin and time frame (i.e., which electrodes contribute the most to reducing the error).

My approach has been to use visualize_activation as follows:

from vis.visualization import visualize_activation

# generate an input that maximises the activation of the (linear) output layer
activation_map = visualize_activation(
    model, layer_idx=-1, filter_indices=None
)

and then to average this idealised input over time frames and frequency bins to obtain a 16-by-16 matrix (representing my grid of electrodes).
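
Concretely, something like this (assuming visualize_activation returns an array with the model's input shape, i.e. (94, 16, 16, 26)):

import numpy as np

# average over time frames (axis 0) and frequency bins (axis -1)
# to get one value per electrode position in the 16 x 16 grid
electrode_map = np.mean(activation_map, axis=(0, -1))  # shape (16, 16)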

However, I'm not sure this really gives me what I'm looking for...

Does anyone have any ideas or suggestions?

Thanks