-
When do you plan to release the RACE evaluation code for RAG TREC?
-
A notebook for the experiments, including:
* [x] comparison with LangChain as a baseline;
* [x] ablation tests over different retrievers and splitter parameters.
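A minimal sketch of what such an ablation loop could look like, assuming a hypothetical fixed-size `split_text` helper and a simple "does the answer span survive chunking" check; the parameter grid and scoring here are illustrative, not the notebook's actual code:

```python
from itertools import product

def split_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Hypothetical splitter: fixed-size character chunks with overlap."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

def run_ablation(text: str, answer_span: str) -> list[tuple[int, int, bool]]:
    """For each (chunk_size, overlap) combination, record whether the
    ground-truth answer span survives intact inside some chunk."""
    results = []
    for chunk_size, overlap in product([128, 256, 512], [0, 32]):
        chunks = split_text(text, chunk_size, overlap)
        hit = any(answer_span in chunk for chunk in chunks)
        results.append((chunk_size, overlap, hit))
    return results
```

Swapping in real retrievers and splitters (e.g. a LangChain text splitter) keeps the same loop shape: one run per parameter combination, one comparable score per run.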
-
I am using the Ragas `evaluate` method to run evaluation on a test dataset of 35 samples with ground truth. It completes the evaluation but fails at the last step with this error. I have added `raise_exceptions=False` …
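For context, `raise_exceptions=False` in Ragas is meant to let the run finish and report `NaN` for rows whose metric computation fails, rather than aborting. A minimal sketch of that general pattern (not Ragas' actual internals; the `metric` callable is hypothetical):

```python
import math

def evaluate_rows(rows, metric, raise_exceptions=True):
    """Score each row with `metric`. With raise_exceptions=False, a
    failing row is recorded as NaN instead of aborting the whole run."""
    scores = []
    for row in rows:
        try:
            scores.append(metric(row))
        except Exception:
            if raise_exceptions:
                raise
            scores.append(math.nan)  # keep going; mark this row as failed
    return scores
```

If the run still crashes at the very last step despite the flag, the error is likely outside the per-row scoring (e.g. in aggregation), which is worth including in the report.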
-
### Is this a new bug?
- [X] I believe this is a new bug
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
I have a streaming RAG e…
-
- [ ] https://github.com/microsoft/TaskWeaver
- [ ] AutoGen Assistants: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_graph_modelling_language_using_select_speaker.ipynb
- [ …
-
Fine-tuning:
- 4 models are working on Ollama (3 TinyLlama versions trained for 1, 10, and 50 epochs)
- I was able to train a Llama 2 model (1 epoch only)
- Llama.cpp deprecated some functionality, which made …
-
### Feature Description
Hi, thanks for this awesome library.
I am trying to benchmark the components of a RAG pipeline up to the retrieval component (chunking, embedding models, rerankers, etc.) o…
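Benchmarking the retrieval stage in isolation usually comes down to rank-based metrics over retrieved chunk IDs. A small sketch of two common ones, hit rate and mean reciprocal rank, assuming one gold chunk ID per query (the data layout is illustrative):

```python
def hit_rate_and_mrr(retrieved: list[list[str]],
                     relevant: list[str]) -> tuple[float, float]:
    """retrieved[i] is the ranked list of chunk IDs the retriever
    returned for query i; relevant[i] is the gold chunk ID.
    Returns (hit rate, mean reciprocal rank)."""
    hits, rr = 0, 0.0
    for ranked, gold in zip(retrieved, relevant):
        if gold in ranked:
            hits += 1
            rr += 1.0 / (ranked.index(gold) + 1)  # rank is 1-based
    n = len(relevant)
    return hits / n, rr / n
```

Because these metrics need only chunk IDs, they let you compare chunkers, embedding models, and rerankers without running the generation step at all.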
-
This is particularly helpful in a use case involving a transcript with small chunk sizes.
Goal: user query/description -> the appropriate context from a movie script
1. Chunk the document using chunk_…
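A sketch of that flow under stated assumptions: the transcript is a list of lines, chunks are small overlapping groups of lines, and retrieval is naive word overlap standing in for an embedding-based retriever (all names here are hypothetical, not the library's API):

```python
def chunk_transcript(lines: list[str], size: int, overlap: int) -> list[str]:
    """Group transcript lines into small overlapping chunks."""
    step = max(size - overlap, 1)
    return ["\n".join(lines[i:i + size]) for i in range(0, len(lines), step)]

def best_chunk(query: str, chunks: list[str]) -> str:
    """Naive lexical retrieval: pick the chunk sharing the most words
    with the query (a stand-in for vector similarity search)."""
    query_words = set(query.lower().split())
    return max(chunks, key=lambda c: len(query_words & set(c.lower().split())))
```

With small `size` and a nonzero `overlap`, a dialogue exchange split across a chunk boundary still appears whole in the neighbouring chunk, which is the main reason overlap helps on transcripts.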
-
```
Can I propose a few more in the style of that YouTuber, but not the same ones?
1) Coding: write the game of "pong" in Python.
2) Integration: write a poem about H2O.ai, Wells Fargo, and NVIDIA that hig…