-
ref
* [Handle large context windows using Ollama's LLMs for evaluation purpose · Issue #1120 · explodinggradients/ragas](https://github.com/explodinggradients/ragas/issues/1120)
feats
* check how g…
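For the ragas issue linked above, a minimal sketch of plugging an Ollama-served model in as the evaluator LLM. It assumes the `langchain-ollama` package and ragas' `LangchainLLMWrapper`; the model names are placeholders, and the dataset column names differ between ragas versions. The knob relevant to the large-context problem is Ollama's `num_ctx`.

```python
# Minimal sketch (assumptions: ragas with LangchainLLMWrapper, the langchain-ollama
# package, and a local Ollama server with the named models pulled).
from datasets import Dataset
from langchain_ollama import ChatOllama, OllamaEmbeddings
from ragas import evaluate
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import answer_relevancy, faithfulness

# num_ctx is the Ollama option that widens the context window so long retrieved
# contexts are not silently truncated during evaluation.
judge = ChatOllama(model="llama3.1", num_ctx=16384, temperature=0.0)
emb = OllamaEmbeddings(model="nomic-embed-text")

# Toy single-row dataset; column names follow the older ragas schema and may vary
# by version.
data = Dataset.from_dict({
    "question": ["What does num_ctx control?"],
    "answer": ["It sets the size of Ollama's context window."],
    "contexts": [["num_ctx is the Ollama option for the context window size."]],
})

result = evaluate(
    data,
    metrics=[faithfulness, answer_relevancy],
    llm=LangchainLLMWrapper(judge),
    embeddings=LangchainEmbeddingsWrapper(emb),
)
print(result)
```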
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
Setup
Python version 3.11
Windows Machine
pip install ragchecker
python -m spacy download en_core_web_sm
It seems like there is trouble connecting with Azure OpenAI or utilising it. I used the…
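A hedged sketch for the Azure connection, assuming RAGChecker resolves its `extractor_name`/`checker_name` through litellm, so an Azure OpenAI deployment is addressed as `azure/<deployment-name>` and configured with litellm's `AZURE_*` environment variables; the deployment name, endpoint, and API version below are placeholders.

```python
# Hedged sketch: assumes RAGChecker routes extractor_name/checker_name via litellm,
# so an Azure OpenAI deployment is addressed as "azure/<deployment-name>" and the
# AZURE_* environment variables below are litellm's. All values are placeholders.
import os
from ragchecker import RAGChecker, RAGResults
from ragchecker.metrics import all_metrics

os.environ["AZURE_API_KEY"] = "<azure-openai-key>"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2024-02-15-preview"

# RAGChecker's JSON input: queries with retrieved contexts, responses and ground truths.
with open("checking_inputs.json") as fp:
    rag_results = RAGResults.from_json(fp.read())

evaluator = RAGChecker(
    extractor_name="azure/gpt-4o",   # "azure/" + your deployment name
    checker_name="azure/gpt-4o",
    batch_size_extractor=8,
    batch_size_checker=8,
)
evaluator.evaluate(rag_results, all_metrics)
print(rag_results)
```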
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I designed a chatbot with an Agent to perform a series of actions.
My agent works like…
-
I am interested in using Uptrain with a locally hosted open-source LLM as the evaluator LLM. I'm currently hosting an LLM service using vLLM, not Ollama. Is there any way to use this local LLM with U…
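One generic route, independent of Uptrain's own configuration options: vLLM exposes an OpenAI-compatible server, so any evaluator that lets you override the OpenAI base URL can call the self-hosted model as the judge. A minimal sketch of that pattern; the server command, model name, and port are placeholders.

```python
# Sketch: serve the model with vLLM's OpenAI-compatible server, then point any
# OpenAI-style client (or an evaluator that accepts a custom base URL) at it.
# Server side, run separately (model name and port are placeholders):
#   python -m vllm.entrypoints.openai.api_server \
#       --model meta-llama/Meta-Llama-3-8B-Instruct --port 8000
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM does not validate the key by default
)

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user",
               "content": "Rate the faithfulness of this answer on a 1-5 scale: ..."}],
    temperature=0.0,
)
print(resp.choices[0].message.content)
```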
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
When evaluating a RAG retrieval service using the llama-index evaluation method, I encou…
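A minimal sketch of running llama-index's response evaluators over one query/answer/context triple with a locally served judge; the `llama-index-llms-ollama` integration, the model name, and the example strings are assumptions here rather than what the original question used.

```python
# Hedged sketch: scoring one query/answer/context triple with llama-index's response
# evaluators, using a locally served judge instead of OpenAI. The Ollama model name
# and the example strings are placeholders.
from llama_index.core.evaluation import FaithfulnessEvaluator, RelevancyEvaluator
from llama_index.llms.ollama import Ollama

judge = Ollama(model="llama3.1", request_timeout=120.0)

evaluators = {
    "faithfulness": FaithfulnessEvaluator(llm=judge),
    "relevancy": RelevancyEvaluator(llm=judge),
}

query = "What does the retrieval service index?"
contexts = ["The service indexes the internal product documentation."]
answer = "It indexes the internal product documentation."

for name, ev in evaluators.items():
    result = ev.evaluate(query=query, response=answer, contexts=contexts)
    print(name, result.passing, result.score, result.feedback)
```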
-
# URL
- https://arxiv.org/pdf/2408.02666
# Affiliations
- Tianlu Wang, N/A
- Ilia Kulikov, N/A
- Olga Golovneva, N/A
- Ping Yu, N/A
- Weizhe Yuan, N/A
- Jane Dwivedi-Yu, N/A
- Richard Yu…
-
Hi! I am wondering if it's possible to use open-source or self-deployed LLMs (and not only OpenAI) as the judge or evaluator? If yes, could you please point me to an example or part of the docs explaini…
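In principle, anything that speaks the OpenAI API (vLLM, llama.cpp server, TGI's OpenAI shim) can serve as the judge. A hand-rolled sketch of that idea, not any particular framework's API; the base URL, model name, and rubric are placeholders.

```python
# Hand-rolled judge sketch against a self-deployed, OpenAI-compatible endpoint
# (vLLM, llama.cpp server, TGI's OpenAI shim, ...). Base URL, model name and rubric
# are placeholders, not any specific framework's API.
import json
from langchain_openai import ChatOpenAI

judge = ChatOpenAI(
    model="my-local-model",               # whatever name your server exposes
    base_url="http://localhost:8000/v1",  # self-hosted OpenAI-compatible endpoint
    api_key="EMPTY",
    temperature=0,
)

RUBRIC = (
    "You are an impartial evaluator. Given a question, a reference answer and a "
    'candidate answer, reply with JSON {"score": 1-5, "reason": "..."} where 5 means '
    "the candidate fully and correctly answers the question."
)

def judge_answer(question: str, reference: str, candidate: str) -> dict:
    msg = judge.invoke([
        ("system", RUBRIC),
        ("human", f"Question: {question}\nReference: {reference}\nCandidate: {candidate}"),
    ])
    # Assumes the judge follows the JSON instruction; add more robust parsing in practice.
    return json.loads(msg.content)

print(judge_answer("What is 2 + 2?", "4", "The answer is 4."))
```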
-
# Evaluating the Effectiveness of LLM-Evaluators (aka LLM-as-Judge)
Use cases, techniques, alignment, finetuning, and critiques against LLM-evaluators.
[https://eugeneyan.com/writing/llm-evaluators/…
-
### My current environment
````text
[pip3] numpy==2.1.1
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-r…