Open rchan26 opened 1 year ago
Fine-tuning embeddings typically improves RAG performance. llama-index now supports this by adding an adapter on top of the embeddings and fine-tuning it. See this notebook: https://gpt-index.readthedocs.io/en/latest/examples/finetuning/embeddings/finetune_embedding_adapter.html
It also seems to have functionality to generate Q&A pairs for the training set. Something to investigate.
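The core idea behind the linked notebook can be sketched independently of llama-index: keep the base embedding model frozen and train a small (here, linear) adapter so that query embeddings land closer to the embeddings of their relevant documents. Everything below — the toy data, dimensions, and plain-numpy training loop — is an illustrative assumption, not the library's actual API:

```python
import numpy as np

# Toy stand-in for frozen base embeddings of (query, relevant-doc) pairs.
# In practice these would come from the embedding model you want to adapt.
rng = np.random.default_rng(0)
dim = 8
docs = rng.normal(size=(16, dim))
# Queries are noisy, linearly distorted views of their matching docs.
rotation = rng.normal(size=(dim, dim))
queries = docs @ rotation + 0.1 * rng.normal(size=(16, dim))

initial_loss = np.mean((queries - docs) ** 2)

# Linear adapter applied to the frozen query embeddings; only W is trained,
# the base embeddings are never touched.
W = np.eye(dim)
lr = 0.01
for _ in range(500):
    adapted = queries @ W
    grad = 2 * queries.T @ (adapted - docs) / len(docs)  # gradient of MSE loss
    W -= lr * grad

loss = np.mean((queries @ W - docs) ** 2)
```

After training, `loss` is far below `initial_loss`: the adapter has learned to undo the distortion, pulling query embeddings toward their matching documents. The real engine in the notebook works the same way conceptually but trains against retrieval-style (query, context) pairs rather than an MSE target.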