Here we provide the code to reproduce the results of our data resource paper: "A large and rich EEG dataset for modeling human visual object recognition", by Alessandro T. Gifford, Kshitij Dwivedi, Gemma Roig and Radoslaw M. Cichy.
If you experience problems with the code, please create a pull request or report the bug directly to Ale via email (alessandro.gifford@gmail.com).
Please visit the dataset page for the data, paper, dataset tutorial and more.
Here you will find some useful videos on our EEG dataset.
To run the code, first install Anaconda, then create and activate a dedicated Conda environment by typing the following into your terminal:
curl -O https://raw.githubusercontent.com/gifale95/eeg_encoding_model/main/environment.yml
conda env create -f environment.yml
conda activate eeg_encoding
Alternatively, after installing Anaconda you can download the environment.yml file, open the terminal in the download directory and type:
conda env create -f environment.yml
conda activate eeg_encoding
The source, raw and preprocessed EEG dataset, the training and test images and the DNN feature maps are available on OSF. The ILSVRC-2012 validation and test images can be found on ImageNet. To run the code, the data must be downloaded and placed into the following directories:
../project_directory/eeg_dataset/source_data/
../project_directory/eeg_dataset/raw_data/
../project_directory/eeg_dataset/preprocessed_data/
../project_directory/image_set/
../project_directory/dnn_feature_maps/pca_feature_maps
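As a quick sanity check before running the analysis scripts, the minimal Python sketch below verifies that the expected folders listed above exist. The project_directory root is a placeholder; adapt it to wherever you placed the downloaded data.

import os

# Placeholder root: adapt to wherever you placed the downloaded data
project_dir = '../project_directory'

# Expected sub-directories, as listed above
expected_dirs = [
    'eeg_dataset/source_data',
    'eeg_dataset/raw_data',
    'eeg_dataset/preprocessed_data',
    'image_set',
    'dnn_feature_maps/pca_feature_maps'
]

for d in expected_dirs:
    path = os.path.join(project_dir, d)
    print(('OK      ' if os.path.isdir(path) else 'MISSING ') + path)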
Here you will find an interactive Colab tutorial on how to load and visualize the preprocessed EEG data and the corresponding stimulus images.
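If you prefer to work outside Colab, the snippet below is a minimal sketch of the loading step covered in the tutorial. The subject folder name, the file name preprocessed_eeg_training.npy and the dictionary keys are assumptions based on the tutorial and may need to be adapted to the files you actually downloaded.

import os
import numpy as np

# Assumed paths and file name: adapt to your own download
project_dir = '../project_directory'
subject = 'sub-01'
eeg_file = os.path.join(project_dir, 'eeg_dataset', 'preprocessed_data',
    subject, 'preprocessed_eeg_training.npy')

# The preprocessed data is assumed to be a dictionary saved with np.save,
# holding the EEG responses, the channel names and the time points
eeg = np.load(eeg_file, allow_pickle=True).item()
print(eeg.keys())

# Assumed shape: (image conditions x repetitions x channels x time points)
print(eeg['preprocessed_eeg_data'].shape)
print(eeg['ch_names'])
print(eeg['times'])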
If you use any of our data or code, in part or in full, please cite our paper:
Gifford AT, Dwivedi K, Roig G, Cichy RM. 2022. A large and rich EEG dataset for modeling human visual object recognition. NeuroImage, 264:119754. DOI: https://doi.org/10.1016/j.neuroimage.2022.119754