explodinggradients / ragas
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
https://docs.ragas.io
Apache License 2.0 · 5.2k stars · 471 forks
Issues · sorted newest first
I've been encountering an issue where Ragas generates test cases from the initially uploaded documents, even after updating the documents and rerunning the process. It either uses the old documents or mixes them with the new ones. (#1002) · opened 20 hours ago by AliHaider0343 · 0 comments
fix(testset): for generating testset with new docs (#1001) · opened 1 day ago by Manav916 · 1 comment
Output testset when using Llama 3 8B instruct model is not proper. (#1000) · opened 2 days ago by Nandakishore-Thekkadathu · 1 comment
Feat/llamaindex: adding llamaindex (#999) · closed 2 days ago by jjmachan · 0 comments
consider langchain object temperature attr in generation (#998) · opened 3 days ago by fealbuqu · 0 comments
Reasoning behind Context Precision @ K (#997) · closed 3 days ago by ChrisGeishauser · 0 comments
Evaluation Error with Gemini/VertexAI: InvalidArgument due to Empty Text Content (#996) · opened 4 days ago by baptiste-pasquier · 0 comments
Fix counter in Runner and Executor in case of any exception (#995) · opened 4 days ago by baptiste-pasquier · 0 comments
Handle embeddings for empty strings in AnswerSimilarity class (#994) · opened 4 days ago by baptiste-pasquier · 0 comments
Using context utilization without ground truth throws error message (#993) · opened 5 days ago by dschwalm · 0 comments
How to save reason of each evaluation metrics? (#992) · opened 5 days ago by JinSeoung-Oh · 1 comment
[R-262] ImportError: llama_index must be installed to use this function. Please, install it with `pip install llama_index`. (#991) · opened 5 days ago by Prabhjot410 · 1 comment
Add initial version of summarization metric (#990) · opened 1 week ago by sky-2002 · 3 comments
division by zero when computing output score (#989) · opened 1 week ago by theoden8 · 0 comments
docs: update ragas_custom_model.ipynb (#988) · closed 6 days ago by eltociear · 1 comment
Additions to community contributed articles (#987) · closed 1 week ago by atarora · 0 comments
is_async missing in context_relevancy in ragas 0.1.8 (#986) · opened 1 week ago by abetatos · 1 comment
Can this part of the code be applied to Chinese scenarios (#985) · opened 1 week ago by w666x · 1 comment
test: added end-to-end test for langsmith (#984) · closed 1 week ago by jjmachan · 0 comments
Contradiction in evaluate is_async parameter docstring and code (#983) · closed 4 days ago by dschwalm · 3 comments
Improve JSON format prompt for large chunks & Handle ZeroDivisionError (#982) · opened 1 week ago by Manav916 · 1 comment
[R-259] Which is the best LLM for evaluation? (#981) · closed 5 days ago by yadavshashank · 1 comment
fix: patch type issue in evolution parsing (#980) · closed 1 week ago by shahules786 · 0 comments
Chore: Fix RuntimeError: ... got Future <..> attached to a different loop (#979) · opened 1 week ago by abetatos · 0 comments
[R-256] make better example dataset for getting started (#978) · opened 1 week ago by jjmachan · 0 comments
Tried Generation Test Set from Together APIs and Hugging Face Embeddings (#977) · opened 1 week ago by Eknathabhiram · 2 comments
feat: Ragas CI/CD (#976) · closed 1 week ago by jjmachan · 0 comments
[R-255] remove `async=True` from ragas Evaluate (#975) · opened 1 week ago by jjmachan · 0 comments
Improve grammar and punctuation in critique prompts: addresses #971 (#974) · closed 1 week ago by ruankie · 0 comments
feat: adding new defaults to openai Model and embeddings (#973) · closed 1 week ago by jjmachan · 0 comments
Fix ensembler: handle edge cases (#972) · closed 1 week ago by shahules786 · 0 comments
Grammar and punctuation improvements in critique prompts (#971) · closed 1 day ago by ruankie · 3 comments
answer-correctness: fix edge cases (#970) · closed 1 week ago by shahules786 · 0 comments
answer-correctness: fix edge cases (#969) · closed 1 week ago by shahules786 · 0 comments
Un-deprecate multiple ground truth answers? (#968) · opened 2 weeks ago by athewsey · 1 comment
Is it possible to add an argument to the evaluate() function to configure the group name? (#967) · opened 2 weeks ago by zzzmc · 1 comment
Testset generation ValueError: invalid literal for int() with base 10: (#966) · opened 2 weeks ago by choshiho · 1 comment
Answer Correctness giving wrong results for batches and single records (#965) · opened 2 weeks ago by aravindpai · 1 comment
Adapted output keys set(output.keys())={'深度', '相关性', '清晰度', '结构'} do not match with the original output keys: output_keys[i]={'structure', 'clarity', 'depth', 'relevance'} (#964) · opened 2 weeks ago by qism · 1 comment
TestsetGenerator -> RuntimeError: ... got Future <..> attached to a different loop (#963) · opened 2 weeks ago by abetatos · 3 comments
embedding nodes: 0%| Segmentation fault (core dumped) (#962) · opened 2 weeks ago by WGS-note · 1 comment
fix typo in f1 score (#961) · opened 2 weeks ago by robuno · 0 comments
AttributeError: 'PhiForCausalLM' object has no attribute 'generate_prompt' (#960) · opened 2 weeks ago by TheDominus · 2 comments
[R-253] possible error in answer_correctness prompt (#959) · closed 1 week ago by shahules786 · 1 comment
fix - typo & prompt (#958) · opened 2 weeks ago by omkar-334 · 0 comments
Random RuntimeError: Tool context error detected. This can occur due to parallelization (#957) · opened 2 weeks ago by franck-cussac · 2 comments
issue with metrics evaluation in case of any exception (#956) · opened 2 weeks ago by mukuls-zeta · 5 comments
[R-254] Issue in Evaluation using local LLM (#955) · opened 2 weeks ago by sheetalkamthe55 · 1 comment
Ragas llama_index integration as shown doesn't work for custom LLMs (#954) · closed 22 hours ago by pliablepixels · 1 comment
Update discord invite link (#953) · closed 2 weeks ago by shahules786 · 0 comments