explodinggradients / ragas

Supercharge Your LLM Application Evaluations 🚀
https://docs.ragas.io
Apache License 2.0

Adapted output keys set(output.keys())={'深度', '相关性', '清晰度', '结构'} do not match with the original output keys: output_keys[i]={'structure', 'clarity', 'depth', 'relevance'} #964

Closed qism closed 1 month ago

qism commented 6 months ago

[ ] I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug

>>> generator.adapt(language, evolutions=[simple])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/envs/rags_new/lib/python3.12/site-packages/ragas/testset/generator.py", line 305, in adapt
    evolution.adapt(language, cache_dir=cache_dir)
  File "/opt/anaconda3/envs/rags_new/lib/python3.12/site-packages/ragas/testset/evolutions.py", line 326, in adapt
    super().adapt(language, cache_dir)
  File "/opt/anaconda3/envs/rags_new/lib/python3.12/site-packages/ragas/testset/evolutions.py", line 262, in adapt
    self.node_filter.adapt(language, cache_dir)
  File "/opt/anaconda3/envs/rags_new/lib/python3.12/site-packages/ragas/testset/filters.py", line 69, in adapt
    self.context_scoring_prompt = self.context_scoring_prompt.adapt(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/envs/rags_new/lib/python3.12/site-packages/ragas/llms/prompt.py", line 241, in adapt
    set(output.keys()) == output_keys[i]
AssertionError: Adapted output keys set(output.keys())={'深度', '相关性', '清晰度', '结构'} do not match with the original output keys: output_keys[i]={'structure', 'clarity', 'depth', 'relevance'}

Ragas version: 0.1.8.dev18+g2d79365
Python version: 3.10
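For context, the assertion fires because the LLM that translates the prompt examples also translated the JSON output keys themselves (深度 = depth, 相关性 = relevance, 清晰度 = clarity, 结构 = structure), while the validator expects the original English keys. A minimal sketch of that comparison (not the actual ragas source, just the check the traceback points at in `prompt.py`):

```python
# Hypothetical sketch of the key-set validation raising in the traceback.
# The adapted example's JSON keys must exactly match the original prompt's keys.
def validate_output_keys(adapted_output: dict, original_keys: set) -> None:
    assert set(adapted_output.keys()) == original_keys, (
        f"Adapted output keys {set(adapted_output.keys())} do not match "
        f"with the original output keys: {original_keys}"
    )

original = {"structure", "clarity", "depth", "relevance"}
# The translation LLM rendered the keys themselves into Chinese:
adapted = {"深度": 3, "相关性": 3, "清晰度": 2, "结构": 3}

try:
    validate_output_keys(adapted, original)
except AssertionError as e:
    print(e)  # the same kind of mismatch reported in the traceback above
```

Because the keys come back from a non-deterministic LLM call, the same adaptation can succeed on one run and fail on the next.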

Code to Reproduce

from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

inference_server_url = "http://xxxxxx:port/v1"
openai_api_key = "sk-xxx"

generator_llm = ChatOpenAI(model="gpt-3.5-turbo-1106",
    openai_api_key=openai_api_key,
    openai_api_base=inference_server_url
)
critic_llm = ChatOpenAI(model="gpt-4-1106-preview",
    openai_api_key=openai_api_key,
    openai_api_base=inference_server_url
)

from langchain_community.embeddings import HuggingFaceBgeEmbeddings

embeddings = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-large-en-v1.5",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},
    query_instruction="embedding this sentence",
)

generator = TestsetGenerator.from_langchain(
    generator_llm,
    critic_llm,
    embeddings
)

from ragas.testset.evolutions import simple, reasoning, multi_context,conditional
language = "Chinese"
generator.adapt(language, evolutions=[simple, reasoning, conditional, multi_context])
generator.save(evolutions=[simple, reasoning, multi_context,conditional])


jjmachan commented 5 months ago

Hey @qism, were you able to fix it? This was a bug because the adaptation was incorrect; we will fix that shortly on our end. In the meantime, what you can do on your end is simply run the adaptation again. If that doesn't work, I would be more than happy to jump on a call and fix this for you.
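Since the adaptation call is non-deterministic, the "run it again" workaround can be wrapped in a small retry loop. This is a hypothetical helper, not a ragas API; `generator.adapt` and `simple` come from the reproduction script above:

```python
# Hypothetical retry helper for a flaky, non-deterministic call.
# Retrying gives the translation LLM another chance to keep the
# JSON output keys in English instead of translating them.
def adapt_with_retries(adapt_fn, attempts: int = 3):
    last_err = None
    for _ in range(attempts):
        try:
            return adapt_fn()
        except AssertionError as err:  # raised when the output keys were translated
            last_err = err
    raise last_err

# Usage (assuming the generator from the snippet above):
# adapt_with_retries(lambda: generator.adapt("Chinese", evolutions=[simple]))
```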

jimmytanj commented 5 months ago

still exists in ragas 0.1.9

jjmachan commented 3 months ago

Tagging #890, which fixes this; do keep track of it.

jjmachan commented 1 month ago

This has been fixed with v0.2 - I know, finally 😅 🎉

Do check out the docs here: https://docs.ragas.io/en/stable/howtos/customizations/metrics/_metrics_language_adaptation/ and the reference here: https://docs.ragas.io/en/stable/references/prompt/#ragas.prompt.PromptMixin

and if you're migrating from v0.1 check out the migration docs here: https://docs.ragas.io/en/stable/howtos/migrations/migrate_from_v01_to_v02

Could you check it out and verify? If not, feel free to comment here and I'll help you out. Really sorry again that it took this long.