Original paper https://arxiv.org/pdf/2208.12266.pdf
Wav2vec 2.0 model and paper (the speech representation the original paper compares against mel-spectrogram targets) https://arxiv.org/abs/2006.11477
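The "Mel" label refers to mel-spectrogram speech targets. As a quick reminder of what a mel representation is, here is a minimal numpy sketch of a triangular mel filterbank using the standard HTK mel formula (all parameter choices here are illustrative, not taken from the papers above):

```python
import numpy as np

def hz_to_mel(f):
    # Standard HTK mel-scale formula.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filterbank of shape (n_mels, n_fft // 2 + 1).

    Multiplying this matrix by a magnitude spectrum gives mel-band energies.
    """
    # Band edges evenly spaced on the mel scale between 0 Hz and Nyquist.
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, mid):            # rising slope of triangle i
            fb[i, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):            # falling slope of triangle i
            fb[i, k] = (hi - k) / max(hi - mid, 1)
    return fb

fb = mel_filterbank(n_mels=40, n_fft=512, sr=16000)
print(fb.shape)  # (40, 257)
```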
Code on Colab https://colab.research.google.com/github/sccn/sound2meg/blob/main/Spatial_Attention.ipynb
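The notebook's name suggests it implements the spatial-attention layer from the original paper, which merges the MEG sensor axis with learned attention weights (in the paper the weights are parameterized by a Fourier basis over 2-D sensor positions; the sketch below just uses free logits, and all names are hypothetical):

```python
import numpy as np

def spatial_attention(meg, logits):
    """Collapse the sensor axis of an MEG batch with softmax attention.

    meg:    (batch, channels, time) array of sensor signals
    logits: (out_channels, channels) unnormalized attention scores
    Returns (batch, out_channels, time).
    """
    # Softmax over the input-channel axis, so each output channel is a
    # convex combination of the input sensors.
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return np.einsum("oc,bct->bot", w, meg)

rng = np.random.default_rng(0)
meg = rng.standard_normal((2, 8, 100))   # 2 trials, 8 sensors, 100 samples
logits = rng.standard_normal((4, 8))     # 4 virtual output channels
out = spatial_attention(meg, logits)
print(out.shape)  # (2, 4, 100)
```

With all-zero logits the weights are uniform, so each output channel reduces to the plain mean over sensors, which is a handy sanity check.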
https://data.donders.ru.nl/collections/di/dccn/DSC_3011220.01_297
https://www.nature.com/articles/s41562-022-01516-2
https://arxiv.org/abs/2007.16104v1
https://www.frontiersin.org/articles/10.3389/fnhum.2021.653659/full Arno: We could do this for our large corpus of child data (3000 subjects)
https://sites.google.com/view/stablediffusion-with-brain/
https://mind-vis.github.io Abdu: This is similar to the stable diffusion one that I came across a while back. It seems to use a more complicated model, but it also uses fMRI.
https://hal.inria.fr/hal-03808304 Abdu: I haven't looked into this in detail, but it seems to be a network that encodes MEG signals for better classification (I'm guessing something like wav2vec, but for brain data?). The code is open source at https://github.com/facebookresearch/deepmeg-recurrent-encoder, so we can experiment with some new ideas.
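To make the "recurrent encoder" idea concrete before digging into the repo, here is a toy numpy sketch of encoding an MEG trial with a plain tanh RNN; this is only a stand-in for whatever architecture deepmeg-recurrent-encoder actually uses, and every name and shape below is made up for illustration:

```python
import numpy as np

def rnn_encode(meg, W_in, W_rec, b):
    """Encode an MEG trial (channels, time) into a hidden-state sequence.

    A single-layer tanh RNN steps through the time axis, mixing the current
    sensor vector with the previous hidden state.
    Returns an array of shape (time, hidden_dim).
    """
    hidden = np.zeros(W_rec.shape[0])
    states = []
    for t in range(meg.shape[1]):
        hidden = np.tanh(W_in @ meg[:, t] + W_rec @ hidden + b)
        states.append(hidden)
    return np.stack(states)

rng = np.random.default_rng(1)
channels, hidden_dim, T = 16, 8, 50
meg = rng.standard_normal((channels, T))
W_in = rng.standard_normal((hidden_dim, channels)) * 0.1
W_rec = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b = np.zeros(hidden_dim)

states = rnn_encode(meg, W_in, W_rec, b)
print(states.shape)  # (50, 8)
```

The final hidden state (or the whole sequence) could then feed a classifier head, which is roughly the setup the comment above is guessing at.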