eric-ai-lab / PEViT

Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"

Inquiry about KAdaptation Implementation for Different ViTs (e.g., DinoV2) #9


MarioAvolio commented 7 months ago

Hello,

I am currently working on a project that applies the KAdaptation technique, as detailed in the paper "Parameter-efficient Model Adaptation for Vision Transformers", to various Vision Transformer models. My focus is particularly on models like DinoV2 and other similar pretrained transformers.

I have been exploring your repository and found it extremely insightful for my research. However, I couldn't locate any specific references or implementations for applying KAdaptation to other Vision Transformers, such as DinoV2.

Could you kindly inform me if there are any existing implementations of KAdaptation that are compatible with DinoV2 or other pretrained transformer models? Additionally, any guidance on adapting KAdaptation to these models would be greatly appreciated.

This information would be immensely beneficial for my ongoing project, and I believe it could also aid others in the community working with similar models and adaptations.

Thank you for your time and assistance.

Best regards, Mario

jkooy commented 7 months ago

Hi, I haven't tried DinoV2, but I believe KAdaptation works on other types of ViT as long as the architecture is similar; DINO mainly changed the training objective, not the architecture. I recently tried KAdaptation for image generation, and the method also works there. You can refer to the implementation at https://github.com/eric-ai-lab/PEViT/blob/master/vision_benchmark/evaluation/model.py; it should work similarly for DINO. Thank you!
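
For anyone attempting this, here is a minimal sketch of the core idea, not the repository's exact API: the paper parameterizes the weight update as a sum of Kronecker products, delta_W = sum_i A_i ⊗ (u_i v_i^T), training only the small factors while the pretrained weights stay frozen. The `KroneckerAdapterLinear` class below is an illustrative assumption written against a standard `nn.Linear`, not code copied from this repo; hyperparameters like `n_kron` and `rank` are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KroneckerAdapterLinear(nn.Module):
    """Hypothetical wrapper: a frozen nn.Linear plus a KAdaptation-style
    Kronecker-product update delta_W = sum_i A_i kron (u_i v_i^T)."""

    def __init__(self, base: nn.Linear, n_kron: int = 4, rank: int = 1):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained projection frozen

        out_f, in_f = base.weight.shape
        assert out_f % n_kron == 0 and in_f % n_kron == 0
        # "slow" weights A_i (n_kron matrices of size n_kron x n_kron),
        # zero-initialized so training starts from the pretrained model
        self.a = nn.Parameter(torch.zeros(n_kron, n_kron, n_kron))
        # low-rank factors u_i, v_i forming each "fast" weight B_i = u_i v_i^T
        self.u = nn.Parameter(torch.randn(n_kron, out_f // n_kron, rank) * 0.01)
        self.v = nn.Parameter(torch.randn(n_kron, rank, in_f // n_kron) * 0.01)

    def delta_weight(self) -> torch.Tensor:
        b = self.u @ self.v  # batched outer products: (n_kron, out_f/n, in_f/n)
        return sum(torch.kron(self.a[i], b[i]) for i in range(self.a.shape[0]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + F.linear(x, self.delta_weight())
```

A hypothetical way to attach this to DinoV2, assuming the attribute names (`blocks`, `attn.qkv`) from the facebookresearch/dinov2 ViT code:

```python
# Patch every attention qkv projection, then train only the adapter factors.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
for blk in model.blocks:
    blk.attn.qkv = KroneckerAdapterLinear(blk.attn.qkv)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
```

The paper's actual factorization and the layers it targets may differ in detail; the repo's model.py linked above is the authoritative reference.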