explodinggradients / ragas

Supercharge Your LLM Application Evaluations πŸš€
https://docs.ragas.io
Apache License 2.0

Models via ChatOllama raise ConnectError() #1478

Open Sohammhatre10 opened 1 month ago

Sohammhatre10 commented 1 month ago

For this code section using ChatOllama and OllamaEmbeddings:

from langchain_ollama.chat_models import ChatOllama
from langchain_ollama.embeddings import OllamaEmbeddings
import ragas
from ragas import evaluate
from ragas.metrics import answer_relevancy
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from datasets import Dataset
import json

# The model should be specified using the `model` parameter
req_llm = ChatOllama(model="mistral")
wrapper = LangchainLLMWrapper(req_llm)
embeddings = OllamaEmbeddings(model="mistral")
wrapper_embedding = LangchainEmbeddingsWrapper(embeddings)

metrics=[answer_relevancy]

for m in metrics:
    m.llm = wrapper
    if hasattr(m, "embeddings"):
        m.embeddings = wrapper_embedding

# Step 3: Load the dataset and run the evaluation
# (the load for amnesty_qa was missing; it comes from the Hugging Face Hub)
from datasets import load_dataset
amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2")

results = evaluate(
    dataset=amnesty_qa["eval"].select(range(2)),
    metrics=[answer_relevancy],
    llm=wrapper,
    embeddings=wrapper_embedding,
)

# Step 4: Print the results
print(json.dumps(results, indent=3))

The error output is:

Evaluating: 100%
 2/2 [02:08<00:00, 53.55s/it]
ERROR:ragas.executor:Exception raised in Job[0]: ConnectError(All connection attempts failed)
ERROR:ragas.executor:Exception raised in Job[1]: ConnectError(All connection attempts failed)
{
   "answer_relevancy": NaN
}

Could anyone please suggest a way around this using open-source LLMs (I don't have access to the OpenAI API)? Any suggestions or solutions are appreciated, as it's urgent.
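The `ConnectError(All connection attempts failed)` above usually means nothing is listening on the Ollama endpoint (port 11434 by default), e.g. because `ollama serve` is not running. A hypothetical helper (not part of ragas or langchain_ollama) can confirm this with the standard library before running `evaluate()`:

```python
import socket


def ollama_reachable(host: str = "localhost", port: int = 11434,
                     timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures.
        return False


if not ollama_reachable():
    print("Ollama server not reachable - start it with `ollama serve` "
          "or check the host/port before evaluating.")
```

If this prints the warning, the `ConnectError` is a server/network problem, not a ragas configuration problem.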

dosubot[bot] commented 1 month ago

I couldn't find a specific solution for the ConnectError with ChatOllama and OllamaEmbeddings in the repository. However, here are some general steps you can take to troubleshoot and potentially resolve the issue:

  1. Check Network Connection: Ensure that your network connection is stable and that there are no firewall or proxy settings blocking the connection.

  2. Verify Model Configuration: Double-check that the model names and parameters are correctly specified. Ensure that the models are available and correctly configured in your environment.

  3. Dependencies: Make sure all necessary dependencies are installed and up to date. Sometimes, connection issues can arise from version mismatches.

  4. Local Setup: If you're using local models, ensure they are properly set up and running. Check any logs or error messages from the model server.

  5. Debugging: Enable debugging in your script to get more detailed error messages. This might provide more insight into where the connection is failing.

  6. Alternative Models: If possible, try using a different open-source model to see if the issue persists. This can help determine if the problem is specific to the Mistral model.
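For step 5 above, one way to surface more detail is Python's standard `logging` module, which will print the full tracebacks that ragas's executor and the underlying HTTP client log during connection attempts:

```python
import logging

# Show everything, including httpx/langchain connection attempts.
logging.basicConfig(level=logging.DEBUG)

# Quieter alternative: only ragas's executor errors in full.
logging.getLogger("ragas.executor").setLevel(logging.DEBUG)
```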

If these steps don't resolve the issue, consider checking any related closed issues or discussions for similar problems, such as the VLLM RAGAS API key error for HuggingFace Models, which might provide additional context or solutions.
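A common cause of this exact error is the client pointing at the wrong address: `ChatOllama` and `OllamaEmbeddings` both accept a `base_url` parameter (defaulting to the local endpoint), which you need to set explicitly when the Ollama server runs in Docker or on another machine. A minimal sketch, assuming a helper that reuses the server's `OLLAMA_HOST` environment-variable convention (langchain_ollama does not read it automatically, that is this sketch's choice):

```python
import os


def ollama_kwargs(model: str = "mistral") -> dict:
    """Build keyword arguments for ChatOllama / OllamaEmbeddings.

    Override the endpoint by setting OLLAMA_HOST, e.g.
    OLLAMA_HOST=http://10.0.0.5:11434 (hypothetical address).
    """
    base_url = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
    return {"model": model, "base_url": base_url}


# Usage (assuming langchain_ollama is installed and `ollama serve` is running):
# req_llm = ChatOllama(**ollama_kwargs())
# embeddings = OllamaEmbeddings(**ollama_kwargs())
```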

To continue talking to Dosu, mention @dosu.