Open francescofan opened 1 month ago
The issue you're experiencing, where the `ground_truth` mirrors the question for conditional questions, might be due to how the `generate_with_langchain_docs` function handles the generation process.
In the Ragas framework, the conditional evolution type is supposed to introduce complexity by modifying the question [1]. However, if the `ground_truth` simply copies the question, it suggests that the generation logic is not correctly implemented or configured for this type.
Here are a few things you could check or try:
Configuration: Ensure that your `my_run_config` is set up correctly and doesn't inadvertently cause this behavior. Although no specific configuration affecting ground truth was found, reviewing your setup might help.
Implementation: Review the implementation of the `generate_with_langchain_docs` function to ensure it properly differentiates between generating questions and ground truths for conditional scenarios.
Version: Make sure you are using the latest version of Ragas, as updates might have addressed this issue.
If these steps don't resolve the issue, it might require a deeper dive into the codebase or reaching out to the maintainers for further assistance.
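As a quick sanity check for the version suggestion above, you can query the installed Ragas version from Python using only the standard library (the helper name `installed_version` is just for illustration):

```python
import importlib.metadata


def installed_version(package: str):
    """Return the installed version string of `package`, or None if absent."""
    try:
        return importlib.metadata.version(package)
    except importlib.metadata.PackageNotFoundError:
        return None


# Prints e.g. "0.1.20", or None if Ragas is not installed in this environment.
print(installed_version("ragas"))
```

If the reported version is older than the latest release, `pip install -U ragas` before retrying the testset generation.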
To continue talking to Dosu, mention @dosu.
@dosu 1. Configuration: the bug occurs when no `my_run_config` is set.
Configuration: When no `my_run_config` is set, the `generate_with_langchain_docs` function relies on default behavior. It uses `query_distribution` to determine scenarios. If `query_distribution` is `None`, it defaults to `default_query_distribution(self.llm)`, which helps differentiate between questions and ground truths [1].
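To illustrate the idea (this is not Ragas's actual data structure; the scenario labels and weights below are made up), a query distribution can be thought of as scenario/probability pairs whose weights sum to 1:

```python
# Hypothetical stand-in for a query distribution: each entry pairs a
# scenario label with the probability that it is sampled during generation.
default_distribution = [
    ("simple", 0.5),
    ("reasoning", 0.25),
    ("conditional", 0.25),
]

# The probabilities must sum to 1 so that every generated sample
# maps to exactly one scenario.
assert abs(sum(p for _, p in default_distribution) - 1.0) < 1e-9
```

With such a distribution in place, each scenario carries its own question- and ground-truth-generation logic, which is what should prevent the two from collapsing into the same string.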
Implementation: The differentiation between generating questions and ground truths is managed by the `generate` method. It uses an `Executor` to handle scenario and sample generation. The method calculates split values from the `query_distribution` probabilities and generates scenarios accordingly. This process inherently differentiates between questions and ground truths through the defined scenarios and their logic [1].
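The split-value calculation mentioned above can be sketched as a proportional allocation of the testset size across scenarios. This is a minimal sketch of the technique, not Ragas's actual code; `split_counts` is a hypothetical helper name:

```python
def split_counts(probabilities, testset_size):
    """Allocate testset_size samples across scenarios in proportion to
    their probabilities, assigning any rounding remainder to the first
    scenario so the counts always sum to testset_size."""
    counts = [int(p * testset_size) for p in probabilities]
    counts[0] += testset_size - sum(counts)
    return counts


# Three evolution types with probabilities 0.5 / 0.25 / 0.25 over 10 samples:
print(split_counts([0.5, 0.25, 0.25], 10))  # -> [6, 2, 2]
```

Each allocated count then drives one scenario's generation loop, where the question and its ground truth are produced by that scenario's own prompts.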
Ragas Version: You are using version 0.1.20, which should support these features. Ensure your setup aligns with the expected configurations and logic.
[ ] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug: question and ground_truth are always the same when generating a conditional_question test case; in fact ground_truth copies the question. Example output (the question, in Chinese, translates to: "If generative AI achieves a breakthrough in computer vision, how might it develop further in natural language processing (NLP)?"):

    {
      "question": "如果生成式AI在计算机视觉方面取得突破,它在自然语言处理(NLP)方面如何进一步发展?",
      "contexts": [ ... ],
      "ground_truth": "如果生成式AI在计算机视觉方面取得突破,它在自然语言处理(NLP)方面如何进一步发展?",
      "evolution_type": "conditional",
      "metadata": [ { ... } ],
      "episode_done": true
    }
Ragas version: 0.1.20 Python version: 3.11.9 System: Windows 10 IDE: Visual Studio Code
Code to Reproduce
Error trace
Expected behavior: ground_truth should be the correct answer, not a repeat of the question.