Has anyone tried saliency map visualizations with open_clip models?
I came across these examples, but they only use OpenAI ResNet-based models:
https://colab.research.google.com/github/kevinzakka/clip_playground/blob/main/CLIP_GradCAM_Visualization.ipynb
https://huggingface.co/spaces/njanakiev/gradio-openai-clip-grad-cam
I know @sagadre has for the OAI ViT models at least, but I don't think for the open_clip ones; he'll know more.
Yeah! Check out this notebook from Hila Chefer: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP_explainability.ipynb
Found this to work pretty well qualitatively on the OAI ViT-B/32 model!
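For the open_clip side, here is a minimal Grad-CAM-style sketch along the lines of the notebooks above. The model name (`'RN50'`, `pretrained='openai'`), the target layer (`model.visual.layer4`, the last conv stage before attention pooling), and the example image path and prompt are all assumptions for illustration, not a verified recipe for every open_clip checkpoint.

```python
# Grad-CAM-style saliency for an open_clip ResNet model (sketch, not a tested implementation).
import torch
import torch.nn.functional as F
from PIL import Image
import open_clip

device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Assumed model/checkpoint; swap in whatever open_clip ResNet you actually use.
model, _, preprocess = open_clip.create_model_and_transforms('RN50', pretrained='openai')
tokenizer = open_clip.get_tokenizer('RN50')
model = model.to(device).eval()

# Capture activations and gradients of the last conv stage (before attnpool).
activations, gradients = [], []

def hook(module, inputs, output):
    activations.append(output)
    output.register_hook(lambda grad: gradients.append(grad))

handle = model.visual.layer4.register_forward_hook(hook)

# Example inputs ('cat.jpg' and the prompt are placeholders).
image = preprocess(Image.open('cat.jpg')).unsqueeze(0).to(device)
text = tokenizer(['a photo of a cat']).to(device)

# Forward with gradients enabled, then backprop the image-text similarity.
image_features = model.encode_image(image)
text_features = model.encode_text(text)
similarity = F.cosine_similarity(image_features, text_features).sum()
model.zero_grad()
similarity.backward()
handle.remove()

# Grad-CAM: weight each channel by the spatial mean of its gradient, sum, ReLU.
acts = activations[0]                          # [1, C, H, W]
grads = gradients[0]                           # [1, C, H, W]
weights = grads.mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode='bilinear', align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1] for overlay
```

The ViT models would need a different approach (e.g. the attention-relevance method from the Hila Chefer notebook), since they don't expose a spatial conv feature map to hook like the ResNet variants do.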