I tried to implement ToMe in the image encoder of the CLIP model. However, the ViT in CLIP uses nn.MultiheadAttention, whose forward pass I couldn't modify. Do you have any ideas on how to apply ToMe to the original CLIP models? Thanks!
There's currently a PR for this: #21. The long and short of it is that editing the attn layer is not necessary; it just improves performance. You can try without that modification.
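To illustrate what "without that modification" looks like, here is a minimal, self-contained sketch of ToMe-style bipartite soft matching that merges tokens between blocks without touching the attention layer. This is not the official ToMe API: `tome_merge` is a hypothetical helper written for this example, it skips the proportional-attention part (which is exactly what modifying the attn layer would add), and for simplicity it does not protect the class token, which the real implementation does.

```python
import torch


def tome_merge(x: torch.Tensor, r: int) -> torch.Tensor:
    """Simplified ToMe-style token merging (illustrative sketch only).

    x: (B, N, C) token features from a transformer block.
    r: number of tokens to remove by merging.
    Returns a (B, N - r, C) tensor.

    Note: unlike the real ToMe, this sketch does not exempt the class
    token from merging and does not track token sizes for
    proportional attention.
    """
    B, N, C = x.shape
    # Cosine-similarity metric (real ToMe uses the attention keys instead).
    metric = x / x.norm(dim=-1, keepdim=True)
    a, b = metric[:, ::2], metric[:, 1::2]      # alternating bipartite split
    scores = a @ b.transpose(-1, -2)            # (B, Na, Nb) similarities

    # For each token in set A, find its most similar token in set B.
    node_max, node_idx = scores.max(dim=-1)
    order = node_max.argsort(dim=-1, descending=True)
    src_idx = order[:, :r]                      # A-tokens to merge away
    keep_idx = order[:, r:]                     # A-tokens to keep

    xa, xb = x[:, ::2], x[:, 1::2].clone()
    bidx = torch.arange(B, device=x.device)[:, None]
    dst_idx = node_idx.gather(1, src_idx)       # matched B-token per merged A-token

    # Sum each merged A-token into its matched B-token, then average.
    xb.index_put_((bidx.expand(B, r), dst_idx), xa[bidx, src_idx], accumulate=True)
    counts = torch.ones(B, xb.shape[1], 1, device=x.device)
    counts.index_put_(
        (bidx.expand(B, r), dst_idx),
        torch.ones(B, r, 1, device=x.device),
        accumulate=True,
    )
    xb = xb / counts

    return torch.cat([xa[bidx, keep_idx], xb], dim=1)  # (B, N - r, C)
```

In CLIP's image encoder you would call something like this between attention and the MLP inside each ResidualAttentionBlock (e.g. via a wrapper around the block), reducing the token count by r per layer; the exact placement follows the ToMe paper, not any code above.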