Deep Networks to explain the human visual brain - Algonauts Challenge 2019
Project Description
I was wondering whether anybody here would be interested in participating in the Algonauts Challenge 2019. The goal of the challenge is to train a deep network that best explains the human visual cortex's response to natural image stimuli. The dataset is provided by the challenge organizers (I already have it and can share it with anybody interested, in case you don't want to register on the challenge webpage and download it from there).
The dataset provided consists of the following:
Images that were shown to subjects (stimuli)
Representational dissimilarity matrices (RDMs) for each subject, calculated from fMRI responses
Representational dissimilarity matrices (RDMs) for each subject, calculated from MEG responses
The challenge is to predict the responses of the early visual cortex and the late visual cortex (mostly around the IT region), i.e. to match their representational dissimilarity matrices.
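To make the matching concrete, here is a minimal sketch of how an RDM is typically built (1 minus the pairwise Pearson correlation of feature vectors) and how two RDMs can be compared via the Spearman correlation of their upper triangles. The function names and the toy data are my own; the challenge's official scoring may differ in details (e.g. noise normalization).

```python
import numpy as np
from scipy.stats import spearmanr

def compute_rdm(features):
    """RDM: 1 - Pearson correlation between the feature vectors
    of every pair of stimuli. features: (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(features)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs
    (the diagonal is zero by construction, so it is excluded)."""
    iu = np.triu_indices(rdm_a.shape[0], k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

# Toy example: 10 stimuli with 50-dimensional feature vectors
rng = np.random.default_rng(0)
feats = rng.standard_normal((10, 50))
rdm = compute_rdm(feats)
print(rdm.shape)                 # (10, 10)
print(rdm_similarity(rdm, rdm))  # 1.0 (an RDM matches itself perfectly)
```

In the challenge, `rdm_a` would come from a network layer's activations on the stimulus images and `rdm_b` from the subjects' fMRI or MEG data.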
Skills required to participate
Anyone can participate, but the following skills may come in handy.
Python
Some basic Deep Learning knowledge (CNNs)
Knowledge of the visual cortex - even basic ideas would be really helpful for deciding on strategies for the challenge
Knowledge of fMRI and MEG is not critical, because the data is already provided as dissimilarity matrices
Integration
The project lies at the intersection of deep learning and neuroscience. It tries to answer the question: can we use deep learning models to explain the activity of the brain? Since the project deals with two regions (early and late visual cortex) and two modalities (fMRI and MEG), I expect the work to be fairly parallelizable. The major milestones would be:
[ ] Train a model's final feature layer to explain late visual cortex activity - fMRI
[ ] Train a model's initial feature layers to explain early visual cortex activity - fMRI
[ ] Train a model's final feature layer to explain late visual cortex activity - MEG
[ ] Train a model's initial feature layers to explain early visual cortex activity - MEG
[ ] Train a model jointly to optimize for early and late activity
Preparation material
Challenge website
GitHub repo
https://github.com/arnaghosh/ohbm2019-algonauts
Communication
Haven't set anything up yet, but I probably will soon :) It depends on what the people interested in participating are most comfortable with.