-
Hi, I want to ask: I was able to run `YouRM` and `VectorRM`, but they only work separately. Is there something I'm missing, or a way to use two or more retrievers in the same run?
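One generic way to get this behavior is to wrap both retrievers in a small combiner that queries each and merges the results. This is only an illustrative sketch — the `retrieve(query, k)` interface and the `CombinedRetriever` class are hypothetical, not the actual API of either module:

```python
from typing import Protocol


class Retriever(Protocol):
    # Hypothetical interface: any object with retrieve(query, k) -> list of docs.
    def retrieve(self, query: str, k: int) -> list[str]: ...


class CombinedRetriever:
    """Query several retrievers and merge their results, de-duplicated,
    preserving the order in which documents first appear."""

    def __init__(self, retrievers: list[Retriever]):
        self.retrievers = retrievers

    def retrieve(self, query: str, k: int) -> list[str]:
        seen: set[str] = set()
        merged: list[str] = []
        for r in self.retrievers:
            for doc in r.retrieve(query, k):
                if doc not in seen:
                    seen.add(doc)
                    merged.append(doc)
        return merged[:k]
```

The same idea works for any number of retrievers; fancier merge strategies (score-weighted fusion, reciprocal rank fusion) can replace the simple first-seen ordering.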
-
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vLLM
I want to run inference of [ColPali](https://huggingface.co/vidore/colpali). I …
-
Hey!
Great job on Arena! In an era of saturated benchmarks, having an actual large-scale, vibes-based evaluation is very important.
I was wondering, would you entertain adding models th…
-
- [ ] [Vespa 🤝 ColPali: Efficient Document Retrieval with Vision Language Models — pyvespa documentation](https://pyvespa.readthedocs.io/en/latest/examples/colpali-document-retrieval-vision-language-m…
-
[Next-Gen Large Language Models: The Retrieval-Augmented Generation (RAG) Handbook](https://www.freecodecamp.org/news/retrieval-augmented-generation-rag-handbook/)
-
I noticed that, no matter which model is used for vault QA embedding, the default context length/chunk size is 2048 tokens, which may reduce retrieval performance, since any note longer th…
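One common workaround for a fixed context window is to split long notes into smaller, overlapping chunks before embedding, so content past the limit isn't silently dropped. A minimal sketch (sizes here are in characters for simplicity — a real pipeline would count tokens with the embedding model's tokenizer, and the function name is illustrative):

```python
def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into fixed-size chunks that overlap by `overlap` units,
    so sentences cut at a boundary still appear whole in one chunk."""
    chunks: list[str] = []
    step = chunk_size - overlap  # advance less than chunk_size to overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # the final chunk already covers the end of the text
    return chunks
```

Each chunk is then embedded and indexed separately, so retrieval can surface the relevant part of a long note instead of truncating it.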
-
```text
python run_baseline_refactor.py
```
error:
```text
python: can't open file 'run_baseline_refactor.py': [Errno 2] No such file or directory
```
This Python file doesn't exist; I think it's still run_baseline_lm…
-
**Describe the bug**
I recently upgraded to version v0.3.0 of taskingai, but I found that the models and Retrieval I created before are still there (visible in the console), yet no assistants are d…
-
We're adding RAR-b with & without instructions as two leaderboard tabs under Retrieval with @gowitheflow-1998. Naming-wise it is confusing to have these and also the `Retrieval w/ Instructions` tab. I…
-
I am exploring the development of a Retrieval-Augmented Generation (RAG) application for Android and am considering using local language models from Hugging Face’s TFLite models. I am looking for guid…