So the input could be a 64x64 image plus a vector of real numbers. Those numbers could be fed into a layer 'after' the convolution. This is common for multi-sensor setups.
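A minimal NumPy sketch of that late-fusion idea, assuming a made-up 3x3 kernel, a 5-dim auxiliary sensor vector, and a 10-way dense head (all shapes here are illustrative, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Naive 'valid' 2D cross-correlation (stand-in for a conv layer)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def fused_forward(image, aux, kernel, W, b):
    """Conv features from the image, then concatenate the auxiliary
    sensor vector before the dense layer -- the 'after the convolution'
    fusion described above."""
    feat = np.maximum(conv2d(image, kernel), 0.0)    # conv + ReLU
    combined = np.concatenate([feat.ravel(), aux])   # image features ++ extra numbers
    return combined @ W + b                          # dense head

image = rng.standard_normal((64, 64))
aux = rng.standard_normal(5)                          # e.g. other sensor readings
kernel = rng.standard_normal((3, 3))
W = rng.standard_normal((62 * 62 + 5, 10)) * 0.01    # 62x62 conv output, +5 aux dims
b = np.zeros(10)

logits = fused_forward(image, aux, kernel, W, b)
print(logits.shape)  # (10,)
```

The point is only where the concatenation happens: the vector skips the convolution entirely and joins the flattened image features right before the dense layer.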
So learning to compress EEG data into a latent space and then reconstruct it would give you good features for downstream tasks. I wonder if a word-vector-style method has been applied to EEG.
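A toy version of that compress-and-reconstruct idea, as a linear autoencoder in NumPy. The "EEG" here is random data with a hypothetical 32-channel x 128-sample window shape, and the 16-dim latent size is arbitrary; a real setup would use actual recordings and a nonlinear encoder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical EEG windows: 200 examples of 32 channels x 128 samples, flattened.
X = rng.standard_normal((200, 32 * 128))

d_in, d_lat = X.shape[1], 16
W_enc = rng.standard_normal((d_in, d_lat)) * 0.01   # encoder: input -> latent
W_dec = rng.standard_normal((d_lat, d_in)) * 0.01   # decoder: latent -> reconstruction

lr = 1e-3
for step in range(20):
    Z = X @ W_enc                  # compress to the latent space
    X_hat = Z @ W_dec              # regenerate the input
    err = X_hat - X
    # gradients of mean-squared reconstruction error
    g_dec = Z.T @ err * (2.0 / err.size)
    g_enc = X.T @ (err @ W_dec.T) * (2.0 / err.size)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# After training, the latent codes are the features for downstream tasks.
features = X @ W_enc
print(features.shape)  # (200, 16)
```

Each 4096-dim window is squeezed down to 16 numbers that a downstream classifier can consume instead of the raw signal.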
You can include other data outside the image itself.