KamitaniLab / bdpy

Python package for brain decoding analysis (BrainDecoderToolbox2 data format, machine learning analysis, functional MRI)
MIT License

Support attention layer (mlp) #76

Open kencan7749 opened 10 months ago

kencan7749 commented 10 months ago

https://github.com/KamitaniLab/bdpy/blob/ec5afa9ce818b667fc85678a75861c84d15c0f27/bdpy/dl/torch/torch.py#L85

I'd like to use this feature extractor with a standard ViT (CLIP) model. I found that the raw output of an attention layer is a tuple of the form (activation, None). Since this feature extractor is also used in icnn.py, it raises an error when we perform reconstruction analysis on an attention layer. One way to avoid this issue is to simply select the first element whenever the output is a tuple.

# Attention layers return a tuple (activation, attention weights); keep only the activation.
if isinstance(output, tuple):
    features[layer] = output[0]
else:
    features[layer] = output
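For reference, here is a minimal, self-contained sketch (not bdpy's actual hook code) of why the unwrapping is needed: standard attention modules such as torch.nn.MultiheadAttention, which ViT-style blocks build on, return a (activation, attention_weights) tuple from their forward pass, so a forward hook that stores raw outputs has to pick out the first element. The names features and make_hook below are illustrative only.

import torch
import torch.nn as nn

features = {}

def make_hook(layer_name):
    def hook(module, inputs, output):
        # Attention modules return a tuple; keep only the activation tensor.
        if isinstance(output, tuple):
            features[layer_name] = output[0].detach()
        else:
            features[layer_name] = output.detach()
    return hook

attn = nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)
attn.register_forward_hook(make_hook('attn'))

x = torch.randn(2, 5, 16)      # (batch, seq_len, embed_dim)
attn(x, x, x)                  # self-attention forward pass triggers the hook
print(features['attn'].shape)  # torch.Size([2, 5, 16])

With the tuple check in place, the same hook works unchanged for ordinary layers whose output is a plain tensor.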