FlagOpen / FlagEmbedding

Retrieval and Retrieval-augmented LLMs
MIT License

How to continue fine-tuning Visualized BGE #968

Open CthulhuAIFrenzy opened 4 months ago

CthulhuAIFrenzy commented 4 months ago

I am currently working on a project that involves fine-tuning Visualized BGE. I have successfully used the pretrained model, and now I would like to fine-tune it further for my specific use case.

  1. Could you please provide detailed instructions or guidelines on how to continue fine-tuning the Visualized BGE model?
  2. Will you be open-sourcing the training scripts used for Visualized BGE in the near future? This would be incredibly helpful for understanding the exact training procedure and for reproducing the results.

Thank you for your assistance and for developing such a powerful tool. Looking forward to your response.

Best regards,

JUNJIE99 commented 4 months ago

May I ask what your specific downstream task is? For instance, what are the modalities of the query and the candidates? You can fine-tune Visualized BGE with a contrastive learning objective, and I can provide you with the Stage-2 training code, though it has not been fully cleaned up yet.
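A rough sketch of what contrastive fine-tuning on top of the released wrapper could look like (not the authors' Stage-2 code): the import path and the `encode(image=..., text=...)` call follow the Visualized BGE README, while the batch fields, temperature, and optimizer settings are placeholder assumptions.

```python
# Illustrative sketch of contrastive fine-tuning for Visualized BGE with in-batch negatives.
# The import path and encode() signature follow the Visualized BGE README; everything else
# (batch fields, temperature, optimizer settings) is an assumption, not the official recipe.
import torch
import torch.nn.functional as F
from FlagEmbedding.visual.modeling import Visualized_BGE

model = Visualized_BGE(
    model_name_bge="BAAI/bge-base-en-v1.5",
    model_weight="Visualized_base_en_v1.5.pth",  # path to the released checkpoint
)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
temperature = 0.02  # assumed value; tune for your data

def contrastive_step(batch):
    # batch: list of dicts, each with an (image, text) query and a positive candidate text.
    # Note: encode() here is assumed to allow gradient flow; if your version runs it under
    # no_grad, call the underlying encoders directly instead.
    q_embs = torch.cat(
        [model.encode(image=ex["query_image"], text=ex["query_text"]) for ex in batch]
    )
    c_embs = torch.cat([model.encode(text=ex["positive_text"]) for ex in batch])

    # In-batch negatives: every other candidate in the batch serves as a negative.
    q_embs = F.normalize(q_embs, dim=-1)
    c_embs = F.normalize(c_embs, dim=-1)
    logits = q_embs @ c_embs.T / temperature
    labels = torch.arange(len(batch), device=logits.device)
    loss = F.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```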

CthulhuAIFrenzy commented 4 months ago

I want to try multimodal fusion retrieval for product search, and also to rerank with multimodal fusion retrieval to improve rank-1 accuracy among the top-N candidates.
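For the reranking part, a rough sketch of re-scoring the top-N candidates from a first-stage retriever with fused image+text embeddings; the `encode` interface follows the Visualized BGE README, while the candidate fields and the first-stage retriever are placeholder assumptions.

```python
# Illustrative sketch of multimodal-fusion reranking for product search:
# re-score the first-stage top-N candidates with fused (image + text) embeddings
# and promote the best match toward rank 1. Candidate fields are assumptions.
import torch
from FlagEmbedding.visual.modeling import Visualized_BGE

model = Visualized_BGE(
    model_name_bge="BAAI/bge-base-en-v1.5",
    model_weight="Visualized_base_en_v1.5.pth",
)
model.eval()

def rerank(query_image, query_text, candidates):
    """candidates: list of dicts with 'image' (path) and 'title' (str) for each top-N hit."""
    with torch.no_grad():
        q_emb = model.encode(image=query_image, text=query_text)
        c_embs = torch.cat(
            [model.encode(image=c["image"], text=c["title"]) for c in candidates]
        )
        # Dot product equals cosine similarity if the embeddings are normalized.
        scores = (q_emb @ c_embs.T).squeeze(0)
    order = torch.argsort(scores, descending=True)
    return [candidates[i] for i in order.tolist()]
```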

JUNJIE99 commented 4 months ago

I can provide you with the original core training code for the Stage-2 training process, which corresponds to the multi-modal training stage in our paper. If needed, feel free to reach out to zhoujunjie [at] bupt [dot] edu [dot] cn.