-
The first version of this plugin will add an `llm rag` set of commands that can run RAG question answering against embedding collections in LLM.
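For context, a minimal sketch of what such a command might do under the hood, using the existing `llm` Python API for embedding collections; the database path, collection name, and model ID are placeholders, and the eventual plugin interface may differ:

```python
import llm
import sqlite_utils

# Open the embeddings database that `llm embed-multi` writes to and load a collection.
db = sqlite_utils.Database("embeddings.db")   # placeholder path
collection = llm.Collection("articles", db)   # placeholder collection name

question = "What does the llm-rag plugin do?"

# Retrieve the most similar stored entries for the question.
context = "\n\n".join(
    entry.content or "" for entry in collection.similar(question, number=5)
)

# Ask a model to answer using only the retrieved context.
model = llm.get_model("gpt-4o-mini")          # placeholder model ID
response = model.prompt(
    question,
    system="Answer the question using only this context:\n\n" + context,
)
print(response.text())
```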
-
I am using Bedrock for my RAG, and faithfulness is NaN most of the time even when the context and answer both make sense. The same problem also occurs with the amnesty dataset shared in the ragas docs.
…
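For reference, a minimal sketch of the kind of setup the report describes, passing Bedrock-backed LangChain wrappers into `ragas.evaluate` as in the ragas 0.1.x docs; the `langchain_aws` wrappers and the model IDs are assumptions about the reporter's configuration:

```python
from datasets import load_dataset
from langchain_aws import BedrockEmbeddings, ChatBedrock
from ragas import evaluate
from ragas.metrics import faithfulness

# The amnesty QA dataset from the ragas docs, where the same NaN scores appear.
amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2")["eval"]

# Bedrock-backed judge LLM and embeddings (model IDs are placeholders).
bedrock_llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")
bedrock_embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")

result = evaluate(
    amnesty_qa,
    metrics=[faithfulness],
    llm=bedrock_llm,
    embeddings=bedrock_embeddings,
)
print(result)  # faithfulness frequently comes back as NaN in the reported setup
```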
-
### Enhancement: Integrate Open-Source LLM for Movie Information Retrieval
#### Description
Enhance the existing web crawler to utilize an open-source Large Language Model (LLM) to fetch and display detai…
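As one possible shape for this, a hypothetical sketch of handing crawled page text to an open-source model served through Hugging Face `transformers`; the model name, prompt, and `extract_movie_info` helper are illustrative assumptions, not part of the original request:

```python
from transformers import pipeline

# Any open-source instruction-tuned model works here; this one is a placeholder.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    max_new_tokens=256,
)

def extract_movie_info(page_text: str) -> str:
    """Ask the model for the movie details found in the crawled page text."""
    prompt = (
        "Extract the movie title, director, main cast, and release year "
        "from the following page text:\n\n" + page_text[:4000]
    )
    # The pipeline returns a list with one dict containing "generated_text".
    return generator(prompt)[0]["generated_text"]
```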
-
[x] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
Running `generate_with_langchain_docs` gets stuck, showing:
…
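For reference, a minimal sketch of the kind of call the report describes, following the ragas 0.1.x test-set generation docs; the document loader, path, and distribution values are placeholders:

```python
from langchain_community.document_loaders import DirectoryLoader
from ragas.testset.evolutions import multi_context, reasoning, simple
from ragas.testset.generator import TestsetGenerator

# Load some LangChain documents (loader and path are placeholders).
documents = DirectoryLoader("docs/", glob="**/*.md").load()

# OpenAI-backed generator, as in the ragas docs.
generator = TestsetGenerator.with_openai()

# This is the call that the report says gets stuck.
testset = generator.generate_with_langchain_docs(
    documents,
    test_size=10,
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
)
print(testset.to_pandas())
```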
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
When the use tool option is set to True, it responds every time with "I apologize, I was unable to find the answer to your question. Is there anything else I can help with?"
-
**Is your feature request related to a problem? Please describe.**
Using it for longer sessions incurs a significant cost for the TTS software
**Describe the solution you'd like**
An altern…
-
I use the code in TensorRT-LLM/examples/baichuan/build.py to compile the Baichuan model with the --use_inflight_batching option, then I deploy the compiled model using TensorRT-LLM inference servic…
-
### Your current environment
```text
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu …
-
https://aclanthology.org/2021.emnlp-main.599/