-
When I use the actual Ollama model directly I get a complete response, but the assistant is cutting it off after one line or so, and I want to know why.
-
# My code is
```
from datasets import Dataset
from ragas.llms import LangchainLLMWrapper
from langchain_community.embeddings import SparkLLMTextEmbeddings, HuggingFaceEmbeddings
#from langchain_commu…
```
-
Here is my code:
```
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context_p…
-
### Description
I'm currently working on a project where I'm using Crew AI agents with chunking and context retention. As part of this process, I'm attempting to implement long-term memory for the ag…
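The long-term memory piece can be prototyped independently of any agent framework. Below is a minimal sketch, assuming a simple append-only store keyed by agent name; the `MemoryStore` class and its methods are hypothetical illustrations, not a CrewAI API:

```python
from collections import defaultdict

class MemoryStore:
    """Hypothetical append-only long-term memory, keyed by agent name."""

    def __init__(self):
        self._entries = defaultdict(list)

    def remember(self, agent: str, text: str) -> None:
        # Persist one observation so it survives across tasks/turns.
        self._entries[agent].append(text)

    def recall(self, agent: str, keyword: str) -> list:
        # Naive keyword lookup; a real system would use embeddings or a vector DB.
        return [e for e in self._entries[agent] if keyword.lower() in e.lower()]

store = MemoryStore()
store.remember("researcher", "The client prefers weekly summaries.")
store.remember("researcher", "Chunk size of 512 worked best for retrieval.")
print(store.recall("researcher", "chunk"))  # → ['Chunk size of 512 worked best for retrieval.']
```

In practice the store would be persisted between runs (e.g. SQLite or a vector index), but the interface above is the shape that chunking plus context retention needs: write per chunk, read by relevance.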
-
My code:
```
import typing as t
import asyncio
from typing import List
from datasets import load_dataset, load_from_disk
from ragas.metrics import faithfulness, context_recall, context_precisi…
-
### Publisher
ACM TIST (ACM Transactions on Intelligent Systems and Technology)
### Link to The Paper
https://dl.acm.org/doi/pdf/10.1145/3641289
### Name of The Authors
Yupeng Chang, Xu Wang, Jin…
-
**Describe the Feature**
In some cases I think it would be interesting to generate a test-dataset variable from a JSON file or dictionary with the necessary keys (question, answer, context...) in orde…
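As a sketch of what that could look like: assuming the input is a list of dicts (or a JSON file) carrying `question`, `answer`, and `contexts` keys, the records can be validated and pivoted into the column-oriented shape a dataset constructor consumes. The `validate_records` helper below is hypothetical, added only for illustration:

```python
import json

REQUIRED_KEYS = {"question", "answer", "contexts"}

def validate_records(records):
    """Hypothetical helper: check every record has the keys a test dataset needs."""
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return records

raw = json.loads("""[
  {"question": "What is RAG?",
   "answer": "Retrieval-augmented generation.",
   "contexts": ["RAG combines retrieval with generation."]}
]""")

records = validate_records(raw)
# Pivot row-wise records into columns, the shape datasets.Dataset.from_dict expects.
columns = {k: [rec[k] for rec in records] for k in sorted(REQUIRED_KEYS)}
print(columns["question"])  # → ['What is RAG?']
```

The resulting `columns` dict could then be handed to `datasets.Dataset.from_dict` to get an evaluable dataset without going through the generator pipeline.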
-
## Is your feature request related to a problem? Please describe.
Garak and CyberSec have insecure code generation detectors. As I understand it, that means they have a scorer LLM or some sort of sta…
-
Title.
Benchmarks:
Summarization
- [x] G-Eval
- [ ] SummHay - https://arxiv.org/abs/2407.01370v1 & https://github.com/salesforce/summary-of-a-haystack
- https://arxiv.org/html/2403.19889v1
R…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I want to evaluate the precision and recall of my RAG application built on llama index. I…
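At their core, context precision and context recall reduce to standard set arithmetic over retrieved versus relevant chunks. A minimal sketch with hypothetical chunk IDs (ragas computes these per question via an LLM judge, but the underlying definition is the same):

```python
def precision_recall(retrieved, relevant):
    """Precision = hits / retrieved count; recall = hits / relevant count."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: the retriever returned 4 chunks, 3 of which were
# actually relevant, and it missed one relevant chunk (c5).
p, r = precision_recall(["c1", "c2", "c3", "c4"], ["c1", "c2", "c3", "c5"])
print(p, r)  # → 0.75 0.75
```

For the full pipeline you would pass a dataset with `question`, `answer`, and `contexts` columns to `ragas.evaluate` with the `context_precision` and `context_recall` metrics.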