explodinggradients / ragas

Supercharge Your LLM Application Evaluations 🚀
https://docs.ragas.io
Apache License 2.0

How to check my embedding llama3 model is running for evaluate? Is Wrapper necessary? #1540

Open RyanTree-HS opened 1 month ago

RyanTree-HS commented 1 month ago

- [ ] I checked the documentation and related resources and couldn't find an answer to my question.

Your Question
what is unclear to you? What would you like to know?

For the following code:

```python
llm = ChatOllama(model="llama3", base_url="http://localhost:11434/")
llm = ChatOllama(model="llama3")
embedding_model = OllamaEmbeddings(model="llama3")
```

```python
result = evaluate(
    dataset=dataset_subset,
    llm=llm,
    embeddings=embedding_model,
    metrics=[context_precision],
    run_config=RunConfig(timeout=180.0, max_workers=16),
)
```

Q1. `ollama run llama3` works and I can talk to the model in the chat, so do I still need to pass the base_url where Ollama is running?
Q2. How can I verify that the embedding model is running as expected?
Q3. Is the LangchainLLMWrapper necessary in my case?

```python
llm = LangchainEmbeddingsWrapper(embedding_model)
embedding_model = LangchainLLMWrapper(llm)
```

Code Examples
This community speaks code. Share your code snippets to help us understand your question better.

Additional context
Anything else you want to share with us?

dosubot[bot] commented 1 month ago

To address your questions:

Q1. Is it necessary to input the base_url where Ollama 3 is running?

ChatOllama defaults to http://localhost:11434, so if your local Ollama server is running on that default address, you do not need to pass base_url explicitly. If Ollama is running on a different host or port, provide base_url so the client can reach the right endpoint.
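For example, a minimal sketch of passing the endpoint explicitly (assuming ChatOllama from langchain_community; the exact import path may differ if you use the newer langchain-ollama package):

```python
# Minimal sketch: point ChatOllama at an explicit Ollama endpoint.
# The default base_url is http://localhost:11434, so this is only
# needed when Ollama runs on a different host or port.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(
    model="llama3",
    base_url="http://localhost:11434",  # adjust if Ollama listens elsewhere
)

# Quick connectivity check before running a full evaluation.
print(llm.invoke("Reply with the single word: ok").content)
```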

Q2. How to verify the embedding model is running as expected?

To verify that your embedding model is being used correctly, call the evaluate function with the required parameters (dataset, metrics, llm, and embeddings). It runs the evaluation and returns an EvaluationResult object with the score for each metric; if the metric scores are computed without errors, the embedding model is functioning as expected [1].
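A quicker sanity check, independent of ragas, is to call the embedding model directly and confirm it returns a vector. A sketch, assuming OllamaEmbeddings from langchain_community:

```python
# Sketch of a direct sanity check for the embedding model: if this
# prints a non-empty vector of floats, the Ollama embedding endpoint
# is reachable and producing embeddings.
from langchain_community.embeddings import OllamaEmbeddings

embedding_model = OllamaEmbeddings(model="llama3")

vector = embedding_model.embed_query("hello world")
print(len(vector), vector[:5])  # embedding dimension and first few values
```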

Q3. Is the LangchainLLMWrapper necessary in my case?

The LangchainLLMWrapper and LangchainEmbeddingsWrapper adapt LangChain models to the interface ragas expects. Since ChatOllama and OllamaEmbeddings are both LangChain classes, evaluate can accept them directly and wrap them internally, so wrapping them yourself is optional [2]. If you do wrap them explicitly, make sure LangchainLLMWrapper wraps the LLM and LangchainEmbeddingsWrapper wraps the embedding model, not the other way around.
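If you do want to wrap explicitly, a sketch of the intended pairing (reusing the llm, embedding_model, and dataset_subset names from the question above):

```python
# Sketch of explicit wrapping: each wrapper takes the matching LangChain
# object (LLM -> LangchainLLMWrapper, embeddings -> LangchainEmbeddingsWrapper)
# before being passed to evaluate().
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.metrics import context_precision
from ragas.run_config import RunConfig

# llm, embedding_model, and dataset_subset are assumed to be defined
# as in the question (ChatOllama, OllamaEmbeddings, and a ragas dataset).
wrapped_llm = LangchainLLMWrapper(llm)
wrapped_embeddings = LangchainEmbeddingsWrapper(embedding_model)

result = evaluate(
    dataset=dataset_subset,
    llm=wrapped_llm,
    embeddings=wrapped_embeddings,
    metrics=[context_precision],
    run_config=RunConfig(timeout=180.0, max_workers=16),
)
```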

To continue talking to Dosu, mention @dosu.

jjmachan commented 1 month ago

There seem to be some issues with using Ollama models; do keep an eye on #1170 as well.