ShreyaKapoor18 opened 5 days ago
Hi @ShreyaKapoor18
Thanks for reaching out. Yes, there is a code example (example.ipynb). It trains a brain prediction model and shows the brain mapping and score. The default model in the notebook is DINOv2; you can swap the backbone for any of the other 6 ViT models implemented in this repository (imported in notebook cell #3). Training one model took about 30 minutes on an RTX 4090.
Let me know if that is what you are looking for!
Hi @huzeyann
Thanks for your reply, I get it now. I had another question about Table 2 of the paper:

"Table 2. Layer Selectors, Brain-Network Alignment. Brain-network alignment is measured by slope and intersection of linear fit (defined in Section 4.3). Larger slope means generally better alignment with the brain, smaller b0 means better alignment of early layers, and larger b1 means better alignment of late layers. R2 is brain score. Bold marks the best within the same model. Insights: 1) CLIP's alignment to the brain improves with larger model capacity, 2) for all others, bigger models decrease the brain-network hierarchy alignment."

Can you tell me how the slope was computed?
Best regards, Shreya Kapoor
Hi @ShreyaKapoor18
Here is a toy example of how the slope is computed. Given 4 voxels from V1, V2, OPA, EBA and a 12-layer model, the ideal layer-selection weight is
import numpy as np

ideal = np.array([
    [0.33, 0.33, 0.33, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0.33, 0.33, 0.33, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0.33, 0.33, 0.33, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0.33, 0.33, 0.33],
])
These weights mean the V1 voxels select the first 3 layers and the EBA voxels select the last 3 layers. The model's learned layer-selection weight is
import torch

layer_sel = torch.rand(4, 12).numpy()
Then we use a linear fit to compute the slope:
from sklearn.linear_model import LinearRegression
model = LinearRegression().fit(ideal.reshape(-1, 1), layer_sel.reshape(-1, 1))
slope = model.coef_[0][0]
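The toy computation above can be packaged as a self-contained sketch (using numpy in place of the torch tensor, so it runs without torch; the helper name fit_slope is mine, not from the repo):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

n_layers = 12

# ideal layer-selection weights: V1, V2, OPA, EBA each pick 3 consecutive layers
ideal = np.zeros((4, n_layers))
for i in range(4):
    ideal[i, i * 3 : (i + 1) * 3] = 0.33

def fit_slope(layer_sel):
    """Linear fit of learned weights against the ideal weights; returns the slope."""
    model = LinearRegression().fit(ideal.reshape(-1, 1), layer_sel.reshape(-1, 1))
    return model.coef_[0][0]

# a perfectly aligned model recovers the ideal weights -> slope = 1
print(fit_slope(ideal))
# random weights carry no hierarchy information -> slope near 0
rng = np.random.default_rng(0)
print(fit_slope(rng.random((4, n_layers))))
```

The sanity check at the end is the useful part: identical weights give a slope of exactly 1, random weights drift toward 0.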
Thanks for the above code, it is very helpful! I also wanted to ask: how do we get the activation in each voxel? From here? fsaverage = np.zeros(163842 * 2) https://github.com/huzeyann/BrainDecodesDeepNets/blob/cf637220d0789da772b2ad0849e90c426b5cf071/brainnet/plot_utils.py#L264
Does it mean that you count activations for 163842 voxels?
Best regards, Shreya
But this does not work with plmodel?
Hi @ShreyaKapoor18
The code you pointed to is not the voxel activation but a plotting function; please check out https://gallantlab.org/pycortex/ for details on plotting. The voxel activation is read in dataset.py; please check example.ipynb for how to download the dataset.
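On the 163842 * 2 number: the standard high-resolution fsaverage surface has 163,842 vertices per hemisphere, so that buffer holds both hemispheres concatenated (surface vertices for plotting, not the model's voxels). A minimal sketch of how such a buffer is laid out, assuming left-then-right ordering:

```python
import numpy as np

N_VERTS = 163842                      # vertices per fsaverage hemisphere
fsaverage = np.zeros(N_VERTS * 2)     # both hemispheres, concatenated

# hypothetical: per-vertex values for the left hemisphere only
left_values = np.random.rand(N_VERTS)
fsaverage[:N_VERTS] = left_values     # first half = left hemisphere
# fsaverage[N_VERTS:] would hold the right hemisphere

left, right = fsaverage[:N_VERTS], fsaverage[N_VERTS:]
```

The actual mapping from the model's voxels onto these surface vertices is done inside the repo's plot_utils.py and pycortex, not in this sketch.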
The hierarchy-slope computation is not yet released in this code repository.
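Since that part is unreleased, here is one plausible (unverified) reading of the b0/b1 intercepts from the Table 2 caption, built on the toy example above: fit the learned weights against the ideal weights, then evaluate the fitted line at the two ends of the ideal-weight range. This is only a guess at the definition; the authors' released code may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# ideal weights as in the toy example: 4 voxels x 12 layers
ideal = np.zeros((4, 12))
for i in range(4):
    ideal[i, i * 3 : (i + 1) * 3] = 0.33

layer_sel = np.random.rand(4, 12)   # stand-in for the learned weights

model = LinearRegression().fit(ideal.reshape(-1, 1), layer_sel.reshape(-1, 1))
slope = model.coef_[0][0]
# hypothetical reading of Table 2's b0/b1: the fitted line evaluated at
# ideal weight 0 and at the maximum ideal weight
b0 = model.intercept_[0]                    # fit at ideal weight 0
b1 = model.predict([[ideal.max()]])[0][0]   # fit at the max ideal weight
```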
No problem, thanks! It would be great to know if it will be released!
Hi, thanks for your reply. I was wondering: does the model predict the activation in each voxel?

plmodel.fit()
brain_value = plmodel(input_img)

Is this the activation per voxel? Since plmodel should predict the activation in each voxel, we should be able to extract it.
Hi @ShreyaKapoor18
Yes, the model predicts the activation in each voxel. The brain_value in your example is the activation per voxel (37,000 voxels).
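If brain_value is a flat vector over those ~37,000 voxels, mapping values to a region reduces to boolean indexing, given a per-voxel ROI labeling. The label array and names below are hypothetical placeholders; the real ROI files come with the dataset (see dataset.py):

```python
import numpy as np

n_voxels = 37000
brain_value = np.random.rand(n_voxels)   # stand-in for plmodel(input_img)

# hypothetical ROI labeling: one integer label per voxel, 0 = unlabeled
roi_labels = np.zeros(n_voxels, dtype=int)
roi_labels[:100] = 1                     # pretend the first 100 voxels are V1
roi_names = {1: "V1", 2: "V2", 3: "V4", 4: "IT"}

v1_values = brain_value[roi_labels == 1]  # predicted activations in "V1"
```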
Hi @huzeyann, thanks for your reply and for all the help so far. Do you know if there is a direct way to map the voxels to a particular region? For example, given the brain values from the above code, can I tell which brain values belong to which brain region, e.g. which correspond to V1, V2, IT, and V4?
Best regards, Shreya
def cluster_channels(weights, target_num_rois=20) https://github.com/huzeyann/BrainDecodesDeepNets/blob/cf637220d0789da772b2ad0849e90c426b5cf071/brainnet/clustering.py#L11
Is it one of the functions here? I will try them out
Hi @huzeyann
Thanks for this repo. I went through your work and was wondering if you could provide some code for scoring the vision transformers on Brain-Score?
Is it available? Were you able to score the vision transformers locally?
Best regards, Shreya