mlfoundations / open_clip

An open source implementation of CLIP.

Attention Maps Visualization #795

Open TahaKoleilat opened 6 months ago

TahaKoleilat commented 6 months ago

How can I get the attention weights from an open_clip model? The clip library had the option to return attention weights by setting need_weights to True, in which case the attention weights are returned from model.encode_image. How can this be done with this library?
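One possible workaround (a sketch, not an official open_clip API): monkey-patch each `nn.MultiheadAttention` layer so it is always called with `need_weights=True`, and store the returned weights in a dict. The attribute path `model.visual.transformer.resblocks[i].attn` is an assumption about open_clip's ViT layout; the demo below applies the patch to a stand-alone attention layer so it runs without open_clip installed.

```python
import torch
import torch.nn as nn

# Captured attention maps, keyed by a layer name we assign below.
attn_maps = {}

def patch_attention(attn: nn.MultiheadAttention, name: str):
    """Wrap attn.forward so attention weights are always computed and stored."""
    orig_forward = attn.forward

    def forward(*args, **kwargs):
        kwargs["need_weights"] = True            # force weight computation
        kwargs["average_attn_weights"] = False   # keep per-head maps (torch >= 1.11)
        out, weights = orig_forward(*args, **kwargs)
        attn_maps[name] = weights.detach()       # shape: (batch, heads, tgt, src)
        return out, weights

    attn.forward = forward

# Demo on a stand-alone attention layer. With open_clip you would instead loop
# over the vision transformer blocks (path is an assumption, adapt as needed):
#   for i, block in enumerate(model.visual.transformer.resblocks):
#       patch_attention(block.attn, f"resblock_{i}")
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
patch_attention(attn, "demo")

x = torch.randn(2, 10, 64)        # (batch, seq_len, embed_dim)
attn(x, x, x)
print(attn_maps["demo"].shape)    # per-head attention map: (2, 4, 10, 10)
```

After a forward pass through the patched model, `attn_maps` holds one per-head attention tensor per patched layer, which you can then reshape into spatial maps for visualization.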