chank20 closed this issue 5 months ago
I'm able to replicate -- I'm getting this issue too. I looked into adding my own LLM as well, and still got the same error:
from langchain.chat_models import ChatOpenAI
from ragas.llms import LangchainLLM
from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    context_recall,
    context_precision,
)
from ragas import evaluate

# Wrap a Langchain chat model so ragas can use it in place of its default LLM.
gpt4 = ChatOpenAI(model_name="gpt-4")
gpt4_wrapper = LangchainLLM(llm=gpt4)

# Point every metric at the wrapped model.
faithfulness.llm = gpt4_wrapper
context_recall.llm = gpt4_wrapper
answer_relevancy.llm = gpt4_wrapper
context_precision.llm = gpt4_wrapper

results = evaluate(dataset, metrics=[answer_relevancy, faithfulness, context_precision])
OpenAIKeyNotFound Traceback (most recent call last)
Cell In[9], line 22
17 answer_relevancy.llm = gpt4_wrapper
18 context_precision.llm = gpt4_wrapper
---> 22 results = evaluate(dataset,metrics=[answer_relevancy,faithfulness,context_precision])
File ~/Library/Python/3.9/lib/python/site-packages/ragas/evaluation.py:97, in evaluate(dataset, metrics, column_map)
93 validate_column_dtypes(dataset)
95 # run the evaluation on dataset with different metrics
96 # initialize all the models in the metrics
---> 97 [m.init_model() for m in metrics]
99 scores = []
100 binary_metrics = []
File ~/Library/Python/3.9/lib/python/site-packages/ragas/evaluation.py:97, in <listcomp>(.0)
93 validate_column_dtypes(dataset)
95 # run the evaluation on dataset with different metrics
96 # initialize all the models in the metrics
---> 97 [m.init_model() for m in metrics]
99 scores = []
100 binary_metrics = []
File ~/Library/Python/3.9/lib/python/site-packages/ragas/metrics/answer_relevance.py:70, in AnswerRelevancy.init_model(self)
68 if isinstance(self.embeddings, OpenAIEmbeddings):
69 if self.embeddings.openai_api_key == "no-key":
---> 70 raise OpenAIKeyNotFound
OpenAIKeyNotFound: OpenAI API key not found! Seems like your trying to use Ragas metrics with OpenAI endpoints. Please set 'OPENAI_API_KEY' environment variable
@RoboTums @chank20 could you update ragas to the latest version and test? This should be fixed in v0.0.20.
But one thing to note: by default, the metric instances are instantiated at import time, so the env var is checked at import time. I'll change this so that it also checks and loads at evaluation time.
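In the meantime, a possible workaround (a minimal sketch, assuming the key lives in a .env file loaded via python-dotenv) is to make sure the key is in the environment before any ragas import runs:

import os
from dotenv import load_dotenv

# Load the key *before* importing ragas: in these versions the metric
# objects (and their env-var check) are created at import time.
load_dotenv()
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY still missing"

from ragas.metrics import answer_relevancy, faithfulness, context_precision
from ragas import evaluate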
I updated ragas from v0.0.19 --> v0.0.20. This caused "ImportError: cannot import name 'AzureOpenAIEmbeddings' from 'langchain.embeddings'", so I updated langchain from v0.0.324 --> v0.0.78.
Now the code errors out one line earlier, and the faithfulness chain doesn't work.
Error Trace:
Traceback (most recent call last):
File "[blah]/src/testing.py", line 19, in <module>
faithfulness_chain = RagasEvaluatorChain(metric=faithfulness)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/anaconda3/envs/genai-poc/lib/python3.11/site-packages/ragas/langchain/evalchain.py", line 29, in __init__
self.metric.init_model()
File "/opt/homebrew/anaconda3/envs/genai-poc/lib/python3.11/site-packages/ragas/metrics/base.py", line 121, in init_model
self.llm.validate_api_key()
File "/opt/homebrew/anaconda3/envs/genai-poc/lib/python3.11/site-packages/ragas/llms/openai.py", line 119, in validate_api_key
raise OpenAIKeyNotFound
ragas.exceptions.OpenAIKeyNotFound: OpenAI API key not found! Seems like your trying to use Ragas metrics with OpenAI endpoints. Please set 'OPENAI_API_KEY' environment variable
I am facing the same problem as chank20 (I am using Python 3.10.5).
Version number || error
0.0.20 || OpenAIKeyNotFound: OpenAI API key not found! Seems like your trying to use Ragas metrics with OpenAI endpoints.
0.0.19 || OpenAIKeyNotFound: OpenAI API key not found! Seems like your trying to use Ragas metrics with OpenAI endpoints.
0.0.18 || ModuleNotFoundError: No module named 'llama_index' --> can be solved simply by installing a newer version of llama-index
So the newest stable version that works for me is 0.0.17 (although, by installing llama-index, 0.0.18 works as well). Thanks a lot for fixing the problem, your library is great!!!
The evaluate method throws the same error for me too, even for just context_precision and context_recall. I have the env variable set. Is there any short-term fix?
llama-index==0.8.69.post1
ragas==0.0.20
I also have the same error with:
Python: 3.10.5
ragas: 0.0.20
openai: 1.3.3
langchain: 0.0.337
I would also be keen for a short-term fix.
I'm currently using the RAGAS metrics with an open-source language model (phind-codellama). I've successfully implemented the faithfulness and context_relevancy metrics. However, I'm encountering an issue with the answer_relevancy metric: it consistently produces the same OpenAIKeyNotFound error despite setting answer_relevancy.llm = llm_wrapper. Any insights or assistance with resolving this would be greatly appreciated.
from ragas.metrics import faithfulness, context_relevancy, answer_relevancy
from ragas.langchain import RagasEvaluatorChain
from ragas.metrics.critique import harmfulness

# llm_wrapper is the wrapper around the local model, defined earlier.
faithfulness.llm = llm_wrapper
context_relevancy.llm = llm_wrapper
harmfulness.llm = llm_wrapper
answer_relevancy.llm = llm_wrapper

eval_chain = RagasEvaluatorChain(metric=answer_relevancy)
How can I solve this problem?
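One likely cause, going by the AnswerRelevancy.init_model frames in the first traceback above: answer_relevancy uses an embeddings model in addition to an LLM, and that defaults to OpenAIEmbeddings, so replacing only .llm leaves the OpenAI key check in place. A minimal sketch of also swapping the embeddings (the HuggingFace model name is just an illustrative choice, and llm_wrapper is the wrapper from the snippet above):

from langchain.embeddings import HuggingFaceEmbeddings
from ragas.metrics import answer_relevancy

answer_relevancy.llm = llm_wrapper
# Replace the default OpenAIEmbeddings so init_model() no longer looks
# for an OpenAI key; any local embeddings model should work here.
answer_relevancy.embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)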
Describe the bug
The OpenAI API key is not found by the answer relevancy metric, but works for all other metrics. The environment variable "OPENAI_API_KEY" is set using dotenv.
Ragas version: 0.0.19
Python version: 3.11.3
Code to Reproduce
Error trace
Expected behavior
The OpenAI API key is confirmed to be loaded as an environment variable using dotenv. RagasEvaluatorChain works with all other metrics besides answer relevancy.