FlagOpen / FlagEmbedding

Retrieval and Retrieval-augmented LLMs
MIT License

Any documents to explain how to fine-tune? #691

Open hjunjie0324 opened 5 months ago

hjunjie0324 commented 5 months ago

I followed the guidance to fine-tune the embedding model and the reranker model on my tasks and got good performance. Big thanks to you guys! My question is: are there any documents or papers that explain how you do the fine-tuning (and hard negative mining)? I assume contrastive learning is involved.

staoxiao commented 5 months ago

Yes, we use contrastive learning to optimize the model. You can refer to our papers:

@misc{bge_m3,
      title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation}, 
      author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
      year={2024},
      eprint={2402.03216},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{cocktail,
      title={LM-Cocktail: Resilient Tuning of Language Models via Model Merging}, 
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Xingrun Xing},
      year={2023},
      eprint={2311.13534},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{llm_embedder,
      title={Retrieve Anything To Augment Large Language Models}, 
      author={Peitian Zhang and Shitao Xiao and Zheng Liu and Zhicheng Dou and Jian-Yun Nie},
      year={2023},
      eprint={2310.07554},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}

@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, 
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
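As a concrete illustration, the contrastive objective commonly used for this kind of embedding fine-tuning can be sketched as an InfoNCE loss with in-batch negatives. This is a minimal sketch, not the exact FlagEmbedding training code; the temperature value is an assumption:

```python
import torch
import torch.nn.functional as F

def infonce_loss(q: torch.Tensor, p: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """In-batch-negatives contrastive (InfoNCE) loss.

    q: (B, D) query embeddings; p: (B, D) positive passage embeddings.
    Row i of p is the positive for row i of q; every other row in the
    batch serves as a negative for that query.
    """
    q = F.normalize(q, dim=-1)
    p = F.normalize(p, dim=-1)
    scores = q @ p.T / temperature                      # (B, B) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)   # diagonal entries are positives
    return F.cross_entropy(scores, labels)
```

Minimizing this pulls each query toward its positive passage and pushes it away from the other passages in the batch; adding mined hard negatives as extra columns of `scores` strengthens the push.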

For hard negative mining, you can refer to https://arxiv.org/abs/2104.08051
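For intuition, hard negative mining with an embedding model typically means scoring the corpus for each query, skipping the very top ranks (which are likely true positives or false negatives), and drawing negatives from a mid-rank window. A minimal sketch; the function name, rank window, and parameters are illustrative assumptions, not the repository's actual mining script:

```python
import numpy as np

def mine_hard_negatives(query_emb: np.ndarray,
                        corpus_emb: np.ndarray,
                        positive_ids: set,
                        range_start: int = 10,
                        range_end: int = 100,
                        num_negs: int = 7) -> list:
    """Return corpus indices to use as hard negatives for one query.

    Scores the corpus with the current embedding model (dot product,
    i.e. cosine similarity if embeddings are L2-normalized), skips the
    top `range_start` ranks, and takes negatives from the rank window
    [range_start, range_end), excluding known positives.
    """
    scores = corpus_emb @ query_emb          # (N,) similarity to the query
    ranked = np.argsort(-scores)             # corpus indices, best first
    window = [int(i) for i in ranked[range_start:range_end] if int(i) not in positive_ids]
    return window[:num_negs]
```

Negatives drawn this way are "hard" because the model already scores them highly, so they carry more gradient signal in the contrastive loss than random negatives would.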