Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

Can I load instructblip and finetune? #47

Open hubei-peng opened 1 year ago

hubei-peng commented 1 year ago

Hi, I want to load the InstructBLIP model (because it is trained on many datasets) and fine-tune it using this repo. However, the InstructBLIP model is in Hugging Face format; how can I load it? Thanks!

Enderfga commented 1 year ago

https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/d771fe2a8ea9aa6fd7cfd32fc6d436ea32586326/accessory/model/LLM/llama_qformerv2.py#L274

I think what you said is an interesting idea, but as far as I know, InstructBLIP has not yet open-sourced its official fine-tuning code (please correct me if I am wrong). The BLIP-2 we are currently using is actually loaded in Hugging Face format, so maybe you can start from the code linked above. Wish you good luck.
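
For reference, here is a minimal sketch of pulling the Q-Former weights out of the InstructBLIP Hugging Face checkpoint as a possible starting point. The checkpoint name and the target attribute on the accessory model are assumptions for illustration, not this repo's actual API, and note that InstructBLIP's Q-Former is instruction-aware, so the keys will not map one-to-one onto the BLIP-2-style Q-Former in `llama_qformerv2.py`:

```python
# Sketch: extract InstructBLIP Q-Former weights from the Hugging Face checkpoint.
# Checkpoint name and the accessory-side attribute are assumptions; adapt to the
# actual module layout in accessory/model/LLM/llama_qformerv2.py.
import torch
from transformers import InstructBlipForConditionalGeneration

hf_model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16
)

# Q-Former parameters from the Hugging Face model.
qformer_state = hf_model.qformer.state_dict()

# The key names will need remapping to match the accessory model's parameters
# before they can be loaded; this line is hypothetical:
# accessory_model.qformer.load_state_dict(remap_keys(qformer_state), strict=False)
```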