Closed · AarnoStormborn closed this 2 months ago
It seems like you've got this under control. If you want help or have specific questions, let me know what I can do for you!
Hey!
The LlamaPacks are designed so that you can download a pack, make your changes, and run it!
If you look at this line: https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-rag-evaluator/llama_index/packs/rag_evaluator/base.py#L107
This is where the OpenAI embedding model is instantiated. You can replace it with your own embedding model and run the pack.
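The swap the maintainer describes amounts to dependency injection: instead of hard-coding the embedding model, the pack could accept it as a constructor argument and only fall back to the OpenAI default when nothing is passed. The sketch below illustrates that pattern with hypothetical stand-in classes (`OpenAIEmbedding`, `HuggingFaceEmbedding`, and a simplified `RagEvaluatorPack` are placeholders, not the pack's actual API):

```python
class OpenAIEmbedding:
    """Stand-in for the default OpenAI embedding model."""
    name = "openai"


class HuggingFaceEmbedding:
    """Stand-in for a user-supplied, non-OpenAI embedding model."""
    name = "huggingface"


class RagEvaluatorPack:
    """Simplified sketch of an evaluator pack that takes the
    embedding model as a parameter instead of hard-coding it."""

    def __init__(self, embed_model=None):
        # Fall back to the OpenAI default only when no model is passed.
        self.embed_model = embed_model if embed_model is not None else OpenAIEmbedding()


# Default behaviour is unchanged; callers can opt in to any model.
default_pack = RagEvaluatorPack()
custom_pack = RagEvaluatorPack(embed_model=HuggingFaceEmbedding())
print(default_pack.embed_model.name)  # openai
print(custom_pack.embed_model.name)   # huggingface
```

Until the pack exposes such a parameter, the same effect is achieved by editing the line linked above in your downloaded copy of `base.py` and substituting your preferred embedding model there.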
Feature Description
The RAG Evaluator Pack is extremely helpful, but it does not provide an option to use an embedding model of our choice; instead, it is hard-coded to use OpenAIEmbedding. A wide variety of embedding models come from non-OpenAI sources, so allowing other models to be plugged in would enable better testing and evaluation.
Reason
No response
Value of Feature
No response