hila-chefer / Transformer-MM-Explainability

[ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers", a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
MIT License

fix memory leak #35

Open lenbrocki opened 1 year ago

lenbrocki commented 1 year ago

In the notebook that creates explanations for ViT, there was a memory leak in the function generate_relevance. This PR fixes that.
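The PR diff isn't shown here, but a common cause of this kind of leak in attribution code is accumulating relevance tensors that still reference the autograd graph, which keeps every step's intermediate activations alive. A minimal illustrative sketch of that pattern and the usual `.detach()` fix (names are hypothetical, not the actual `generate_relevance`):

```python
import torch

def accumulate_relevance(outputs, leak=True):
    """Hypothetical sketch: collect per-step relevance maps.

    With leak=True, the stored tensors keep their autograd history,
    so the computation graph for every step stays in memory.
    Detaching before storing lets each step's graph be freed.
    """
    maps = []
    for out in outputs:
        rel = out * out  # stand-in for a relevance computation
        if leak:
            maps.append(rel)           # retains the autograd graph
        else:
            maps.append(rel.detach())  # drops autograd history
    return maps

inputs = [torch.randn(4, requires_grad=True) for _ in range(3)]
leaky = accumulate_relevance(inputs, leak=True)
fixed = accumulate_relevance(inputs, leak=False)
```

Here `leaky` entries still carry `grad_fn` and grow memory usage across iterations, while `fixed` entries are plain tensors that can be garbage-collected along with their graphs.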