Closed · Yang4096 closed this 4 days ago
Hi @Yang4096,
Thank you for showing interest in PromptSRC!
Regarding your question, unfortunately we do not currently have EVA-CLIP based PromptSRC source code files.
If you would like to incorporate training using EVA-CLIP, kindly note the following:
1) First, replace the original CLIP code in our repository with the EVA-CLIP code (https://github.com/baaivision/EVA/tree/master/EVA-CLIP).
2) You would then need to properly load the EVA-CLIP model in the PromptSRC trainer code (trainers/promptsrc.py), in the function linked below.
https://github.com/muzairkhattak/PromptSRC/blob/bb95c77b634d63488f2cad81ff4a72d53bdd06d5/trainers/promptsrc.py#L19
3) Following the prompt learning code in https://github.com/muzairkhattak/PromptSRC/blob/main/clip/model.py, you would need to embed the same prompt-learning logic in the EVA-CLIP model.py.
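For step 2, a minimal sketch of what an EVA-CLIP replacement for the `load_clip_to_cpu` function in `trainers/promptsrc.py` might look like. Note this is an assumption-heavy illustration, not official code: the names `eva_clip`, `create_model_and_transforms`, and the model tag `"EVA02-CLIP-B-16"` are taken from my reading of the EVA-CLIP repository and should be checked against its README. The factory is injectable (`create_fn`) so the loader can be exercised without downloading weights.

```python
# Hypothetical sketch: loading EVA-CLIP on CPU in place of PromptSRC's
# load_clip_to_cpu (trainers/promptsrc.py). All EVA-CLIP names below are
# assumptions -- verify against the EVA-CLIP README before using.

def load_eva_clip_to_cpu(model_name="EVA02-CLIP-B-16",   # assumed model tag
                         pretrained="eva02_clip_b16.pt", # assumed checkpoint
                         create_fn=None):
    """Return an EVA-CLIP model on CPU, in fp32, for PromptSRC training.

    `create_fn(name, pretrained)` builds the model; by default it wraps
    EVA-CLIP's open_clip-style factory, but it can be swapped out (e.g.
    with a stub) for testing without the actual weights.
    """
    if create_fn is None:
        from eva_clip import create_model_and_transforms  # assumed API

        def create_fn(name, ckpt):
            # EVA-CLIP follows the open_clip factory convention:
            # it returns (model, preprocess_train, preprocess_val).
            model, _, _ = create_model_and_transforms(name, pretrained=ckpt)
            return model

    model = create_fn(model_name, pretrained)
    model.float()  # PromptSRC loads weights in fp32 before moving to GPU
    return model
```

The prompt-learning hooks from step 3 would then be attached to the text and vision towers of the model this function returns, mirroring what `clip/model.py` does in the original repository.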
Once this is set up, you can run the same PromptSRC repository to train PromptSRC with the EVA-CLIP foundation model. I hope that helps. Let us know if this resolves your query.
Sincerely, Muhammad Uzair
Hi @Yang4096,
I hope your query is resolved. I am now closing the issue. Feel free to reopen it in case there are still any questions.
Thank you and kind regards!
Dear Authors,
I have read your paper with great interest, particularly your work with the newly introduced VL model, EVA-CLIP (CVPR’23). I am currently working on a project that involves foundational VL models and I am very keen to understand how to run your model on EVA-CLIP.
Could you kindly provide the code or any detailed instructions on how to implement and fine-tune your approach on EVA-CLIP? Specifically, I am interested in the IVLP and PromptSRC prompting approaches you employed.
Thank you in advance.

![eva](https://github.com/muzairkhattak/PromptSRC/assets/56780900/a18f4e89-168f-49a5-af55-4d82a2a7a471)