run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Feature Request]: No customization for embedding model in RAG Evaluator pack #12011

Closed AarnoStormborn closed 2 months ago

AarnoStormborn commented 6 months ago

Feature Description

The RAG Evaluator Pack is extremely helpful, but it does not provide an option to choose an embedding model; it is hard-coded to use OpenAIEmbeddings. A wide variety of embedding models come from non-OpenAI sources, so allowing other models would enable broader testing and evaluation.

Reason

No response

Value of Feature

No response

dosubot[bot] commented 6 months ago

It seems like you've got this under control. If you want help or have specific questions, let me know what I can do for you!

anoopshrma commented 6 months ago

Hey!

The llama packs are designed so that you can download the pack's source, make your changes, and run it!

If you look at this line: https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-rag-evaluator/llama_index/packs/rag_evaluator/base.py#L107

This is where the OpenAI embedding model is instantiated. You can replace it with your embed model of choice and run the pack.
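To illustrate the "download and edit" pattern described above, here is a minimal, self-contained sketch. The classes below are stand-ins, not the real llama_index API (the actual pack is downloaded with `download_llama_pack` and edited in `base.py`); the point is only the shape of the change: accept an optional `embed_model` and fall back to the hard-coded OpenAI one when none is given.

```python
# Illustrative sketch of swapping the hard-coded embedding model in a
# downloaded llama pack. All classes here are stand-ins for illustration,
# NOT the real llama_index classes.

class OpenAIEmbedding:
    """Stand-in for the embedding model hard-coded in base.py."""
    name = "openai"


class HuggingFaceEmbedding:
    """Stand-in for a non-OpenAI embedding model you might prefer."""
    def __init__(self, model_name: str):
        self.name = model_name


class RagEvaluatorPack:
    """Stand-in pack, edited to accept an optional embed_model override."""
    def __init__(self, embed_model=None):
        # Original behavior: always construct OpenAIEmbedding.
        # Edited behavior: use the override when one is provided.
        self.embed_model = embed_model or OpenAIEmbedding()


# With the edit applied, the pack uses the model you pass in:
pack = RagEvaluatorPack(embed_model=HuggingFaceEmbedding("BAAI/bge-small-en-v1.5"))
print(pack.embed_model.name)
```

In the real pack you would make the equivalent edit directly in the downloaded `base.py`, replacing the `OpenAIEmbedding(...)` call at the linked line with your own embed model.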